CN115129213A - Data processing method and device, electronic equipment and storage medium - Google Patents

Data processing method and device, electronic equipment and storage medium

Info

Publication number
CN115129213A
CN115129213A
Authority
CN
China
Prior art keywords
facility
current cell
live
preset
point location
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210609382.2A
Other languages
Chinese (zh)
Other versions
CN115129213B (en)
Inventor
Zhou Jinglei
Wang Hao
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ruiting Network Technology Shanghai Co ltd
Original Assignee
Ruiting Network Technology Shanghai Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ruiting Network Technology Shanghai Co ltd filed Critical Ruiting Network Technology Shanghai Co ltd
Priority to CN202210609382.2A priority Critical patent/CN115129213B/en
Publication of CN115129213A publication Critical patent/CN115129213A/en
Application granted granted Critical
Publication of CN115129213B publication Critical patent/CN115129213B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04845: Interaction techniques based on graphical user interfaces [GUI] for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00: Manipulating 3D models or images for computer graphics
    • G06T 19/003: Navigation within 3D models or images

Abstract

The invention provides a data processing method and apparatus, an electronic device, and a storage medium. The method includes: acquiring facility identifiers for a plurality of preset orientations outside a current cell; determining three-dimensional point location coordinates for each preset orientation based on a reference orientation of the current cell; and determining, according to those coordinates, a point location for each preset orientation on a first panoramic model corresponding to the current cell, then marking the facility identifier at the point location to generate a second panoramic model corresponding to the current cell, where each facility identifier on the second panoramic model is used to link the facility live-action view corresponding to that identifier. Because the second panoramic model carries the facility identifier of the external facility at each preset orientation, the facility panorama corresponding to an identifier can be displayed directly from it. This establishes a link between the current cell and its external facilities, enables panorama switching between the cell and those facilities, and simplifies the user's operations.

Description

Data processing method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a data processing method and apparatus, an electronic device, and a storage medium.
Background
With the development of Internet technology, electronic devices can provide applications for finding housing online, and a cell panorama can be displayed in the application's interface so that users can learn about the cell's environment.
In the prior art, when a user opens the panorama of a cell in such an application, only the panorama of the current cell can be viewed. If the user wants to view the surroundings of the current cell, for example a neighboring cell, the user must look up the name of the neighboring cell on an electronic map, search for that name in the application's cell list page, and enter the neighboring cell's panorama from the search results. This operation is cumbersome and reduces the user's house-finding efficiency.
Disclosure of Invention
Embodiments of the present invention provide a data processing method, an apparatus, an electronic device, and a storage medium, so as to implement panorama switching between a cell and its surrounding facilities and to simplify user operations.
According to a first aspect of the embodiments of the present invention, there is provided a data processing method, including:
acquiring facility identifiers for a plurality of preset orientations outside a current cell;
determining three-dimensional point location coordinates of each preset orientation based on a reference orientation of the current cell; and
determining a point location of each preset orientation on a first panoramic model corresponding to the current cell according to the three-dimensional point location coordinates, and marking the facility identifier at the point location to generate a second panoramic model corresponding to the current cell, where the facility identifier on the second panoramic model is used to link the facility live-action view corresponding to that identifier.
According to a second aspect of an embodiment of the present invention, there is provided a data processing apparatus including:
a facility identifier acquisition module, configured to acquire facility identifiers for a plurality of preset orientations outside a current cell;
a point location coordinate determination module, configured to determine three-dimensional point location coordinates of each preset orientation based on a reference orientation of the current cell; and
a facility identifier marking module, configured to determine a point location of each preset orientation on a first panoramic model corresponding to the current cell according to the three-dimensional point location coordinates, and to mark the facility identifier at the point location to generate a second panoramic model corresponding to the current cell, where the facility identifier on the second panoramic model is used to link the facility live-action view corresponding to that identifier.
According to a third aspect of embodiments of the present invention, there is provided an electronic apparatus, including: a processor, a memory and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the data processing method according to the first aspect.
According to a fourth aspect of embodiments of the present invention, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the data processing method according to the first aspect.
The data processing method and apparatus, electronic device, and storage medium provided by embodiments of the present invention acquire facility identifiers for a plurality of preset orientations outside a current cell, determine the three-dimensional point location coordinates of each preset orientation based on the reference orientation of the current cell, determine the point location of each preset orientation on the first panoramic model corresponding to the current cell according to those coordinates, and mark the facility identifier at the point location to generate a second panoramic model corresponding to the current cell. Because the second panoramic model carries the facility identifier of the external facility at each preset orientation of the current cell, the facility panorama of the corresponding external facility can be displayed directly from its identifier. This establishes a link between the current cell and its surrounding facilities, enables panorama switching between the cell and those facilities, and simplifies the user's operations.
Drawings
Fig. 1 is a flowchart of a data processing method according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of the relationship between a plurality of preset orientations and the coordinate axes in an embodiment of the present invention;
Fig. 3 is a flowchart of a data processing method according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of a first live-action view presentation interface according to an embodiment of the present invention;
Fig. 5 is another schematic diagram of the first live-action view presentation interface according to an embodiment of the present invention;
Fig. 6 is a block diagram of a data processing apparatus according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a flowchart of a data processing method provided in an embodiment of the present invention. The method may be executed by an electronic device such as a mobile phone or a computer. As shown in Fig. 1, the data processing method includes:
step 110, acquiring facility identifications of a plurality of preset positions outside the current cell.
The current cell may be the cell currently displayed or currently being processed. The plurality of preset orientations may be eight preset orientations, four preset orientations, or another number of preset orientations. When there are four preset orientations, they may be east, south, west, and north; when there are eight, they may be east, south, west, north, southeast, northeast, southwest, and northwest.
In combination with an electronic map, a server can screen out the facility identifiers of facilities outside the cell that lie within a preset range in each preset orientation of the current cell, and return those identifiers to an electronic device such as a mobile phone or computer; alternatively, the electronic device itself can perform the same screening against the electronic map. The preset range may be, for example, a straight-line distance of 500 meters determined from the longitude and latitude of the current cell. A facility outside the cell may be another cell, a park, an office building, a school, a hospital, and so on.
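By way of illustration only, the screening step could be sketched as follows in JavaScript; the facilities and currentCell inputs, their field names, and the helper names are assumptions rather than the patent's implementation, with the straight-line distance approximated by the haversine formula:

// Sketch (assumed inputs): keep identifiers of facilities whose straight-line
// distance from the current cell is within the preset range.
function haversineMeters(lat1, lon1, lat2, lon2) {
  const R = 6371000; // mean Earth radius in meters
  const toRad = (deg) => (deg * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const h = Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(h));
}

function screenNearbyFacilityIds(facilities, currentCell, rangeMeters = 500) {
  return facilities
    .filter((f) => haversineMeters(currentCell.lat, currentCell.lon, f.lat, f.lon) <= rangeMeters)
    .map((f) => f.id);
}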
Step 120: determine the three-dimensional point location coordinates of each preset orientation based on the reference orientation of the current cell.
The reference orientation may be one of the preset orientations; for example, due north may be set as the reference orientation. The position of the current cell's reference orientation is set in the cell's panoramic model in advance.
Based on the reference orientation of the current cell and the positional relationship between each preset orientation and the reference orientation, the three-dimensional point location coordinates of each preset orientation can be determined.
In an embodiment of the present invention, determining the three-dimensional point location coordinates of each preset orientation based on the reference orientation of the current cell includes: determining the three-dimensional point location coordinates of each preset orientation based on the virtual camera rotation angle corresponding to the reference orientation of the current cell and the included angle between each preset orientation and the reference orientation.
Fig. 2 is a schematic diagram of the relationship between the preset orientations and the coordinate axes in an embodiment of the present invention. The preset orientations all lie in the horizontal plane, so the Y-axis value of every three-dimensional point location coordinate is 0, and the three-dimensional axes can be reduced to a two-dimensional system of the X axis and the Z axis. With eight preset orientations, their relationship to the coordinate axes may be as shown in Fig. 2, and for each preset orientation only the X-axis and Z-axis values need to be computed. By default the virtual camera initially faces the negative Z direction. For example, if the initial rotation angle of the track controller is a (an angle measured by rotating from the negative Z direction), the coordinates of each preset orientation are computed from its included angle with the X axis: the three-dimensional point location coordinate of due west is (cos a, 0, sin a), so with a = 0 it is (1, 0, 0), and so on, finally yielding the three-dimensional point location coordinate (x, 0, z) of each preset orientation.
When the reference orientation is due north, the virtual camera rotation angle is the included angle between due north and the Z axis, and the three-dimensional point location coordinates are computed from each preset orientation's included angle with the X axis. Letting currentRotate denote the virtual camera rotation angle corresponding to the reference orientation, and computing each preset orientation's coordinates from its included angle with the reference orientation, the three-dimensional point location coordinates of the eight preset orientations are:
East: x = cos(currentRotate + π), y = 0, z = sin(currentRotate + π);
South: x = cos(currentRotate - π/2), y = 0, z = sin(currentRotate - π/2);
West: x = cos(currentRotate), y = 0, z = sin(currentRotate);
North: x = cos(currentRotate + π/2), y = 0, z = sin(currentRotate + π/2);
Southeast: x = (cos(currentRotate + π) + cos(currentRotate - π/2))/2, y = 0, z = (sin(currentRotate + π) + sin(currentRotate - π/2))/2;
Northeast: x = (cos(currentRotate + π) + cos(currentRotate + π/2))/2, y = 0, z = (sin(currentRotate + π) + sin(currentRotate + π/2))/2;
Southwest: x = (cos(currentRotate) + cos(currentRotate - π/2))/2, y = 0, z = (sin(currentRotate) + sin(currentRotate - π/2))/2;
Northwest: x = (cos(currentRotate) + cos(currentRotate + π/2))/2, y = 0, z = (sin(currentRotate) + sin(currentRotate + π/2))/2.
The logic code for calculating the three-dimensional point location coordinates of each preset orientation may be expressed as:
switch (item.id) {
  case 1: // east
    point.x = Math.cos(currentRotate + Math.PI);
    point.y = 0;
    point.z = Math.sin(currentRotate + Math.PI);
    break;
  case 2: // south
    point.x = Math.cos(currentRotate - Math.PI / 2);
    point.y = 0;
    point.z = Math.sin(currentRotate - Math.PI / 2);
    break;
  case 3: // west
    point.x = Math.cos(currentRotate);
    point.y = 0;
    point.z = Math.sin(currentRotate);
    break;
  case 4: // north
    point.x = Math.cos(currentRotate + Math.PI / 2);
    point.y = 0;
    point.z = Math.sin(currentRotate + Math.PI / 2);
    break;
  case 5: // southeast
    point.x = (Math.cos(currentRotate + Math.PI) + Math.cos(currentRotate - Math.PI / 2)) / 2;
    point.y = 0;
    point.z = (Math.sin(currentRotate + Math.PI) + Math.sin(currentRotate - Math.PI / 2)) / 2;
    break;
  case 6: // northeast
    point.x = (Math.cos(currentRotate + Math.PI) + Math.cos(currentRotate + Math.PI / 2)) / 2;
    point.y = 0;
    point.z = (Math.sin(currentRotate + Math.PI) + Math.sin(currentRotate + Math.PI / 2)) / 2;
    break;
  case 7: // southwest
    point.x = (Math.cos(currentRotate) + Math.cos(currentRotate - Math.PI / 2)) / 2;
    point.y = 0;
    point.z = (Math.sin(currentRotate) + Math.sin(currentRotate - Math.PI / 2)) / 2;
    break;
  case 8: // northwest
    point.x = (Math.cos(currentRotate) + Math.cos(currentRotate + Math.PI / 2)) / 2;
    point.y = 0;
    point.z = (Math.sin(currentRotate) + Math.sin(currentRotate + Math.PI / 2)) / 2;
    break;
}
Based on the virtual camera rotation angle corresponding to the reference orientation of the current cell and the included angle between each preset orientation and the reference orientation, the three-dimensional point location coordinates of each preset orientation can be determined accurately, which improves the accuracy of the subsequent facility identifier marking.
Step 130: determine the point location of each preset orientation on the first panoramic model corresponding to the current cell according to the three-dimensional point location coordinates of each preset orientation, and mark the facility identifier at the point location to generate a second panoramic model corresponding to the current cell, where the facility identifier on the second panoramic model is used to link the facility live-action view corresponding to that identifier.
According to the three-dimensional point location coordinates of each preset orientation, a three-dimensional vector can be constructed with the Vector3 class of three.js; the point location of each preset orientation on the first panoramic model of the current cell is determined from that vector, the point location is marked on the first panoramic model, and the facility identifier of the preset orientation is marked at the point location, thereby generating the second panoramic model of the current cell. Three.js is an MIT-licensed 3D JavaScript library that runs on WebGL; it aims to greatly simplify 3D programming, so an animated 3D scene can be built in a few lines of code without hand-writing shaders or matrix math. The first panoramic model and the second panoramic model may be cube panoramic models or other types of panoramic model.
When the live-action view of the current cell is displayed based on the second panoramic model, the facility identifiers on the model can be displayed, and if a selection operation on a facility identifier is detected, the view can jump to the facility live-action view of the corresponding facility outside the cell. At least one facility identifier may be marked at the point location of each preset orientation.
In an embodiment of the present invention, determining, according to the three-dimensional point location coordinates of each preset orientation, the point location of each preset orientation on the first panoramic model corresponding to the current cell includes: for each preset orientation, determining the intersection of the target straight line corresponding to the preset orientation and the first panoramic model as the point location of that preset orientation on the first panoramic model of the current cell, where the target straight line is the line connecting the three-dimensional point location coordinate of the preset orientation and the center point of the first panoramic model.
That is, the line between each preset orientation's three-dimensional point location coordinate and the center point of the first panoramic model gives the target straight line, and the intersection of that line with the first panoramic model of the current cell is taken as the preset orientation's point location on the model. Using this intersection yields an accurate point location for each preset orientation on the first panoramic model, which improves the accuracy of the facility identifier marking.
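For illustration, the intersection test could be sketched with three.js as follows; the THREE.* calls are the real three.js API, while the function and parameter names are assumptions:

import * as THREE from 'three';

// Sketch: cast a ray from the model's center point through the computed
// three-dimensional point location coordinate; the first hit on the panoramic
// model mesh is taken as the point location. The panorama mesh must face
// inward (material.side = THREE.BackSide) for the ray to hit it from inside.
function pointOnPanorama(pointCoord, panoramaMesh) {
  const center = new THREE.Vector3(0, 0, 0); // center point of the model
  const direction = new THREE.Vector3(pointCoord.x, pointCoord.y, pointCoord.z)
    .sub(center)
    .normalize(); // direction of the target straight line
  const raycaster = new THREE.Raycaster(center, direction);
  const hits = raycaster.intersectObject(panoramaMesh);
  return hits.length > 0 ? hits[0].point : null; // intersection, or null if none
}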
In an embodiment of the present invention, marking the facility identifier at the point location to generate the second panoramic model corresponding to the current cell includes: marking the point location on the first panoramic model and marking the corresponding facility identifier at that point location, thereby generating the second panoramic model corresponding to the current cell.
After the point location of each preset orientation on the first panoramic model is determined, the point location is marked on the model and the facility identifier of the preset orientation is marked at it, yielding the second panoramic model of the current cell, on which every preset orientation's point location and facility identifier are marked. Because both are marked on the generated second panoramic model, they can be displayed together when the live-action view of the current cell is shown, making it easy for the user to see the positional relationship between each identified facility and the current cell.
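The patent does not specify a rendering primitive for these marks; as one hedged possibility, a facility identifier could be drawn to a canvas and attached at the point location as a three.js sprite (reusing the THREE import above; scene, point, and facilityName are assumed inputs):

// Sketch: label a point location with its facility identifier text.
function addFacilityLabel(scene, point, facilityName) {
  const canvas = document.createElement('canvas');
  canvas.width = 256;
  canvas.height = 64;
  const ctx = canvas.getContext('2d');
  ctx.font = '32px sans-serif';
  ctx.fillStyle = '#ffffff';
  ctx.fillText(facilityName, 8, 44);
  const texture = new THREE.CanvasTexture(canvas);
  const sprite = new THREE.Sprite(new THREE.SpriteMaterial({ map: texture }));
  sprite.position.copy(point); // place the label at the marked point location
  sprite.scale.set(0.2, 0.05, 1); // keep the label small relative to the model
  scene.add(sprite);
  return sprite;
}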
In the data processing method provided in this embodiment, facility identifiers for a plurality of preset orientations outside the current cell are acquired, the three-dimensional point location coordinates of each preset orientation are determined based on the reference orientation of the current cell, the point location of each preset orientation on the first panoramic model of the current cell is determined from those coordinates, and the facility identifier is marked at the point location to generate the second panoramic model corresponding to the current cell. Because the second panoramic model carries the facility identifier of the external facility at each preset orientation, the facility panorama corresponding to an identifier can be displayed directly, establishing a link between the current cell and its surrounding facilities, enabling panorama switching between them, and simplifying the user's operations.
Fig. 3 is a flowchart of a data processing method provided in an embodiment of the present invention. The method may be executed by an electronic device such as a mobile phone or a computer. As shown in Fig. 3, the data processing method includes:
step 310, facility identifications of a plurality of preset positions outside the current cell are obtained.
And 320, respectively determining the three-dimensional point location coordinates of each preset position based on the reference position of the current cell.
Step 330, determining a point location of each preset position on the first panoramic model corresponding to the current cell according to the three-dimensional point location coordinates of each preset position, and marking the facility identifier on the point location to generate a second panoramic model corresponding to the current cell, where the facility identifier on the second panoramic model is used to link the facility live-action diagram corresponding to the facility identifier.
And 340, displaying the first live-action picture corresponding to the current cell based on the second panoramic model in a display interface.
When the first live-action view corresponding to the current cell is displayed in the presentation interface, the portion to be displayed is obtained from the second panoramic model based on the current display view angle. The first live-action view may include the real scene of both the current cell and the surrounding facilities outside it.
Step 350: in response to a selection operation on a target facility identifier in the first live-action view corresponding to the current cell, switch, in the presentation interface, the first live-action view to the facility live-action view corresponding to the target facility identifier.
Fig. 4 is a schematic diagram of a first live-action view presentation interface in an embodiment of the present invention. As shown in Fig. 4, the first live-action view of the current cell displayed in the presentation interface may include the facility identifier 1 corresponding to at least one preset orientation, and may further include the point location 2 corresponding to the preset orientation. When a selection operation by the user on one target facility identifier is detected, the facility live-action view corresponding to that identifier is obtained, and the first live-action view of the current cell is switched to it in the presentation interface. Optionally, the first live-action view may display only the facility identifier 1 without the point location 2.
In the data processing method provided in this embodiment, the first live-action view corresponding to the current cell is displayed in the presentation interface based on the cell's second panoramic model, and in response to a selection operation on a target facility identifier in that view, the view is switched to the facility live-action view corresponding to the target facility identifier. The interface thus switches directly from the current cell to the target facility's live-action view, simplifying the user's operations.
In an embodiment of the present invention, switching, in response to a selection operation on a target facility identifier in the first live-action view corresponding to the current cell, the first live-action view to the facility live-action view corresponding to the target facility identifier in the presentation interface includes: in response to the selection operation, sending a data request to a server, where the data request is used to request facility data corresponding to the target facility identifier from the server; and switching, in the presentation interface, the first live-action view of the current cell to the facility live-action view corresponding to the target facility identifier according to the second panoramic model corresponding to the target facility identifier in the facility data.
The facility data may include the second panoramic model corresponding to the target facility identifier and may also include other data; for example, when the target facility corresponding to the target facility identifier is a cell, the facility data may further include house source (listing) information.
The first live-action view of the current cell displayed in the presentation interface may include the facility identifier corresponding to at least one preset orientation. When a selection operation by the user on a target facility identifier is detected, a data request is sent to the server; after receiving it, the server returns the facility data corresponding to the target facility identifier. The second panoramic model corresponding to the target facility identifier is obtained from the facility data, the facility live-action view to be displayed is determined from that model based on the initial view angle of the target facility, and the first live-action view of the current cell is switched to the facility live-action view in the presentation interface.
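A minimal sketch of this request-and-switch flow follows; the endpoint URL, the response fields, and the loadPanorama/renderHouseSourceInfo helpers are assumptions rather than the patent's actual interface:

// Sketch: selecting a target facility identifier requests its facility data
// from the server, then swaps in that facility's panorama.
async function onFacilityIdentifierSelected(facilityId) {
  const response = await fetch('/api/facility-data?id=' + encodeURIComponent(facilityId));
  const facilityData = await response.json();
  // Second panoramic model of the target facility plus its initial view angle.
  loadPanorama(facilityData.panoramaModel, facilityData.initialViewAngle);
  if (facilityData.houseSourceInfo) {
    renderHouseSourceInfo(facilityData.houseSourceInfo); // e.g. a listings panel
  }
}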
By requesting the facility data of the target facility identifier from the server promptly in response to the selection operation in the first live-action view of the current cell, and switching the presentation interface to the facility live-action view corresponding to the target facility identifier, live-action switching between the cell and its surrounding facilities is realized.
In an embodiment of the present invention, when the first live-action view of the current cell is switched to the facility live-action view corresponding to the target facility identifier in the presentation interface, the method further includes: when the facility data includes house source information, displaying the house source information corresponding to the target facility identifier in the presentation interface.
The facility data may include house source information when the target facility corresponding to the target facility identifier is a target cell. In that case, the house source information displayed in the presentation interface may cover listings in the cell, listings near the cell, or listings in other cells similar to them, making it convenient for the user to look up these listings and find housing.
The facility data may further include any information corresponding to the current cell, such as cell introduction, identification of facilities in the cell, and the like.
In an embodiment of the present invention, displaying the first live-action view corresponding to the current cell based on the second panoramic model in the presentation interface includes: determining, from the second panoramic model and based on the initial view angle of the current cell, the first live-action view within the initial view angle range corresponding to that angle, and displaying the first live-action view in the presentation interface, where the first live-action view includes at least one preset orientation within the initial view angle range and the facility identifier corresponding to each such preset orientation.
The initial view angle may be the view angle corresponding to one of the preset orientations, or another view angle, for example a cell-center view angle.
Based on the initial view angle of the current cell, the virtual camera rotation angle corresponding to that angle is determined, the first live-action view within the corresponding initial view angle range is determined from the second panoramic model, and the view is displayed in the presentation interface. The displayed first live-action view can show at least one preset orientation within the initial view angle range together with each such orientation's facility identifier, making it convenient for the user to select a facility identifier and switch live-action views.
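For example, with the three.js OrbitControls (one implementation of the track controller mentioned above), aiming the virtual camera at the initial view angle could be sketched as follows; the angle parameterization matches the X-Z plane convention of the coordinate formulas, and the function name is an assumption:

import { OrbitControls } from 'three/addons/controls/OrbitControls.js';

// Sketch: the panorama center is at the origin; placing the camera slightly
// off-center, opposite the initial orientation, makes it face that orientation
// through the center.
function applyInitialView(camera, renderer, initialViewAngle) {
  const controls = new OrbitControls(camera, renderer.domElement);
  controls.target.set(0, 0, 0); // orbit about the panorama center
  const r = 0.1; // small radius keeps the viewpoint effectively at the center
  camera.position.set(-r * Math.cos(initialViewAngle), 0, -r * Math.sin(initialViewAngle));
  controls.update();
  return controls;
}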
In an embodiment of the present invention, the first live-action view further includes an orientation identifier corresponding to at least one preset orientation within the initial view angle range; the orientation identifier is used to link the second live-action view corresponding to it in the second panoramic model, and each orientation identifier corresponds to a view angle range. The method further includes:
in response to a selection operation on a target orientation identifier in the first live-action view, switching, in the presentation interface, the first live-action view to the second live-action view corresponding to the target orientation identifier, where the second live-action view includes at least one preset orientation within the target view angle range corresponding to the target orientation identifier, the facility identifier corresponding to each such preset orientation, and the orientation identifier corresponding to at least one preset orientation within the target view angle range.
Fig. 5 is another schematic diagram of the first live-action view presentation interface in an embodiment of the present invention. As shown in Fig. 5, the first live-action view displayed in the presentation interface may further include an orientation identifier 3 corresponding to at least one preset orientation within the initial view angle range. The orientation identifier 3 can be operated by the user to link to the second live-action view corresponding to that identifier in the current cell's second panoramic model, and each orientation identifier corresponds to a view angle range. When a user selection of a target orientation identifier in the first live-action view is detected, the second live-action view is determined from the second panoramic model based on the target view angle range of that identifier, and the first live-action view is switched to it in the presentation interface. The second live-action view may further include at least one preset orientation within the target view angle range, the facility identifier of each such preset orientation, and the orientation identifier of at least one preset orientation within the range. The displayed facility and orientation identifiers can be operated by the user to switch to the corresponding live-action views, so the user can quickly change orientation and inspect the corresponding view, improving the browsing experience.
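As a final sketch (the orientation data shape is assumed), switching on a target orientation identifier needs no server request: the camera is simply re-aimed at the view angle range bound to the identifier within the same second panoramic model:

// Sketch: jump to the second live-action view linked by an orientation
// identifier by re-aiming the camera; reuses applyInitialView's convention.
function onOrientationIdentifierSelected(camera, controls, orientation) {
  const angle = orientation.centerViewAngle; // center of the target view angle range
  const r = 0.1;
  camera.position.set(-r * Math.cos(angle), 0, -r * Math.sin(angle));
  controls.update(); // identifiers within the new view angle range become visible
}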
Fig. 6 is a block diagram of a data processing apparatus according to an embodiment of the present invention. As shown in Fig. 6, the data processing apparatus includes:
a facility identifier acquisition module 610, configured to acquire facility identifiers for a plurality of preset orientations outside a current cell;
a point location coordinate determination module 620, configured to determine the three-dimensional point location coordinates of each preset orientation based on the reference orientation of the current cell;
a facility identifier marking module 630, configured to determine the point location of each preset orientation on the first panoramic model corresponding to the current cell according to the three-dimensional point location coordinates, and to mark the facility identifier at the point location to generate a second panoramic model corresponding to the current cell, where the facility identifier on the second panoramic model is used to link the facility live-action view corresponding to that identifier.
Optionally, the facility identifier marking module includes:
a point location determination unit, configured to determine, for each preset orientation, the intersection of the target straight line corresponding to the preset orientation and the first panoramic model as the point location of that preset orientation on the first panoramic model of the current cell, where the target straight line is the line connecting the three-dimensional point location coordinate of the preset orientation and the center point of the first panoramic model.
Optionally, the facility identifier marking module further includes:
a facility identifier marking unit, configured to mark the point location on the first panoramic model and to mark the corresponding facility identifier at that point location, thereby generating the second panoramic model corresponding to the current cell.
Optionally, the point location coordinate determination module is specifically configured to:
determine the three-dimensional point location coordinates of each preset orientation based on the virtual camera rotation angle corresponding to the reference orientation of the current cell and the included angle between each preset orientation and the reference orientation.
Optionally, the apparatus further comprises:
a live-action view display module, configured to display, in a presentation interface, the first live-action view corresponding to the current cell based on the second panoramic model;
a first live-action view switching module, configured to switch, in the presentation interface and in response to a selection operation on a target facility identifier in the first live-action view of the current cell, the first live-action view to the facility live-action view corresponding to the target facility identifier.
Optionally, the first live-action view switching module includes:
a data request sending unit, configured to send a data request to a server in response to a selection operation on a target facility identifier in the first live-action view of the current cell, where the data request is used to request the facility data corresponding to the target facility identifier from the server;
a live-action view switching unit, configured to switch, in the presentation interface, the first live-action view of the current cell to the facility live-action view corresponding to the target facility identifier according to the second panoramic model corresponding to the target facility identifier in the facility data.
Optionally, the apparatus further comprises:
and the house source information display module is used for displaying the house source information corresponding to the target facility identification in the display interface under the condition that the facility data comprises the house source information.
Optionally, the live-action view display module is specifically configured to:
determine, from the second panoramic model and based on the initial view angle of the current cell, the first live-action view within the initial view angle range corresponding to that angle, and display it in the presentation interface, where the first live-action view includes at least one preset orientation within the initial view angle range and the facility identifier corresponding to each such preset orientation.
Optionally, the first live-action view further includes an orientation identifier corresponding to at least one preset orientation within the initial view angle range, where the orientation identifier is used to link the second live-action view corresponding to it in the second panoramic model and corresponds to a view angle range; the apparatus further includes:
a second live-action view switching module, configured to switch, in the presentation interface and in response to a selection operation on a target orientation identifier in the first live-action view, the first live-action view to the second live-action view corresponding to the target orientation identifier, where the second live-action view includes at least one preset orientation within the target view angle range corresponding to the target orientation identifier, the facility identifier corresponding to each such preset orientation, and the orientation identifier corresponding to at least one preset orientation within the target view angle range.
The data processing apparatus provided in the embodiment of the present invention is configured to implement each step of the data processing method described in the embodiment of the present invention, and for specific implementation of each module of the apparatus, reference is made to the corresponding step, which is not described herein again.
The data processing apparatus provided by the embodiment of the present invention acquires facility identifiers for a plurality of preset orientations outside the current cell, determines the three-dimensional point location coordinates of each preset orientation based on the reference orientation of the current cell, determines the point location of each preset orientation on the first panoramic model corresponding to the current cell according to those coordinates, and marks the facility identifier at the point location to generate the second panoramic model corresponding to the current cell. Because the second panoramic model carries the facility identifier of the external facility at each preset orientation, the facility panorama can be displayed directly from its identifier, establishing a link between the current cell and surrounding facilities, enabling panorama switching between them, and simplifying the user's operations.
Preferably, an embodiment of the present invention further provides an electronic device, including a processor, a memory, and a computer program stored in the memory and executable on the processor. When executed by the processor, the computer program implements each process of the data processing method embodiments described above and can achieve the same technical effects; to avoid repetition, details are not repeated here.
An embodiment of the present invention further provides a computer-readable storage medium storing a computer program that, when executed by a processor, implements each process of the data processing method embodiments and can achieve the same technical effects; to avoid repetition, details are not repeated here. The computer-readable storage medium may be a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by "comprising a" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It can be clearly understood by those skilled in the art that, for convenience and simplicity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions may be stored in a computer-readable storage medium if they are implemented in the form of software functional units and sold or used as separate products. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: various media capable of storing program codes, such as a U disk, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (12)

1. A data processing method, comprising:
acquiring facility identifiers for a plurality of preset orientations outside a current cell;
determining three-dimensional point location coordinates of each preset orientation based on a reference orientation of the current cell; and
determining a point location of each preset orientation on a first panoramic model corresponding to the current cell according to the three-dimensional point location coordinates, and marking the facility identifier at the point location to generate a second panoramic model corresponding to the current cell, wherein the facility identifier on the second panoramic model is used to link a facility live-action view corresponding to the facility identifier.
2. The method according to claim 1, wherein the determining, according to the three-dimensional point location coordinates of each preset orientation, a point location of each preset orientation on the first panoramic model corresponding to the current cell comprises:
for each preset orientation, determining an intersection of a target straight line corresponding to the preset orientation and the first panoramic model as the point location of the preset orientation on the first panoramic model corresponding to the current cell, wherein the target straight line is a line connecting the three-dimensional point location coordinate of the preset orientation and a center point of the first panoramic model.
3. The method according to claim 1 or 2, wherein marking the facility identifier at the point location to generate a second panoramic model corresponding to the current cell comprises:
marking the point location on the first panoramic model, and marking the corresponding facility identifier at the marked point location, thereby generating the second panoramic model corresponding to the current cell.
4. The method according to claim 1, wherein the determining three-dimensional point location coordinates of each preset orientation based on the reference orientation of the current cell comprises:
determining the three-dimensional point location coordinates of each preset orientation based on a virtual camera rotation angle corresponding to the reference orientation of the current cell and an included angle between each preset orientation and the reference orientation.
5. The method of claim 1, further comprising:
displaying, in a presentation interface, a first live-action view corresponding to the current cell based on the second panoramic model; and
in response to a selection operation on a target facility identifier in the first live-action view corresponding to the current cell, switching, in the presentation interface, the first live-action view to a facility live-action view corresponding to the target facility identifier.
6. The method according to claim 5, wherein the switching, in response to a selection operation on a target facility identifier in the first live-action view corresponding to the current cell, the first live-action view to the facility live-action view corresponding to the target facility identifier in the presentation interface comprises:
in response to the selection operation on the target facility identifier in the first live-action view corresponding to the current cell, sending a data request to a server, wherein the data request is used to request facility data corresponding to the target facility identifier from the server; and
switching, in the presentation interface, the first live-action view corresponding to the current cell to the facility live-action view corresponding to the target facility identifier according to a second panoramic model corresponding to the target facility identifier in the facility data.
7. The method according to claim 6, further comprising, when switching the first live-action view of the current cell to the facility live-action view corresponding to the target facility identifier in the presentation interface:
when the facility data includes house source information, displaying the house source information corresponding to the target facility identifier in the presentation interface.
8. The method according to claim 5, wherein the displaying, in the presentation interface, the first live-action view corresponding to the current cell based on the second panoramic model comprises:
determining, from the second panoramic model and based on an initial view angle of the current cell, a first live-action view within an initial view angle range corresponding to the initial view angle, and displaying the first live-action view in the presentation interface, wherein the first live-action view includes at least one preset orientation within the initial view angle range and a facility identifier corresponding to each preset orientation.
9. The method according to claim 8, wherein the first live-action view further includes an orientation identifier corresponding to at least one preset orientation within the initial view angle range, the orientation identifier is used to link a second live-action view corresponding to the orientation identifier in the second panoramic model, and the orientation identifier corresponds to a view angle range, the method further comprising:
in response to a selection operation on a target orientation identifier in the first live-action view, switching, in the presentation interface, the first live-action view to the second live-action view corresponding to the target orientation identifier, wherein the second live-action view includes at least one preset orientation within a target view angle range corresponding to the target orientation identifier, a facility identifier corresponding to each preset orientation, and an orientation identifier corresponding to at least one preset orientation within the target view angle range.
10. A data processing apparatus, comprising:
a facility identifier acquisition module, configured to acquire facility identifiers for a plurality of preset orientations outside a current cell;
a point location coordinate determination module, configured to determine three-dimensional point location coordinates of each preset orientation based on a reference orientation of the current cell; and
a facility identifier marking module, configured to determine a point location of each preset orientation on a first panoramic model corresponding to the current cell according to the three-dimensional point location coordinates, and to mark the facility identifier at the point location to generate a second panoramic model corresponding to the current cell, wherein the facility identifier on the second panoramic model is used to link a facility live-action view corresponding to the facility identifier.
11. An electronic device, comprising: a processor, a memory, and a computer program stored in the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the data processing method according to any one of claims 1 to 9.
12. A computer-readable storage medium having stored thereon a computer program that, when executed by a processor, implements the steps of the data processing method according to any one of claims 1 to 9.
CN202210609382.2A 2022-05-31 2022-05-31 Data processing method, device, electronic equipment and storage medium Active CN115129213B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210609382.2A CN115129213B (en) 2022-05-31 2022-05-31 Data processing method, device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210609382.2A CN115129213B (en) 2022-05-31 2022-05-31 Data processing method, device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN115129213A true CN115129213A (en) 2022-09-30
CN115129213B CN115129213B (en) 2024-04-26

Family

ID=83377692

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210609382.2A Active CN115129213B (en) 2022-05-31 2022-05-31 Data processing method, device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115129213B (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105405046A (en) * 2015-12-28 2016-03-16 徐亦隽 House checking, renting and selling method of house sharing platform
CN107220726A (en) * 2017-04-26 2017-09-29 消检通(深圳)科技有限公司 Fire-fighting equipment localization method, mobile terminal and system based on augmented reality
US20180200628A1 (en) * 2011-06-03 2018-07-19 Nintendo Co., Ltd. Storage medium storing information processing program, information processing device, information processing system, and information processing method
CN108830692A (en) * 2018-06-20 2018-11-16 厦门市超游网络科技股份有限公司 Long-range panorama sees room method, apparatus, user terminal, server and storage medium
CN108958460A (en) * 2017-05-19 2018-12-07 深圳市掌网科技股份有限公司 Building sand table methods of exhibiting and system based on virtual reality
CN110174950A (en) * 2019-05-28 2019-08-27 广州视革科技有限公司 A kind of method for changing scenes based on transmission gate
CN111340598A (en) * 2020-03-20 2020-06-26 北京爱笔科技有限公司 Method and device for adding interactive label
CN112182433A (en) * 2020-09-25 2021-01-05 瑞庭网络技术(上海)有限公司 Display switching method and device
CN112182432A (en) * 2020-09-25 2021-01-05 瑞庭网络技术(上海)有限公司 House resource display method and device
CN113115023A (en) * 2020-01-09 2021-07-13 百度在线网络技术(北京)有限公司 Panoramic scene switching method, device and equipment
WO2022063276A1 (en) * 2020-09-25 2022-03-31 瑞庭网络技术(上海)有限公司 Method and device for displaying house listings, electronic device, and machine-readable medium
CN114511684A (en) * 2021-01-07 2022-05-17 深圳思为科技有限公司 Scene switching method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN115129213B (en) 2024-04-26

Similar Documents

Publication Publication Date Title
US11217019B2 (en) Presenting image transition sequences between viewing locations
US8803992B2 (en) Augmented reality navigation for repeat photography and difference extraction
US20110279478A1 (en) Virtual Tagging Method and System
US20220058888A1 (en) Image processing method and apparatus, and computer storage medium
CN108269305A (en) A kind of two dimension, three-dimensional data linkage methods of exhibiting and system
CN111340598B (en) Method and device for adding interactive labels
US11609345B2 (en) System and method to determine positioning in a virtual coordinate system
Gomez-Jauregui et al. Quantitative evaluation of overlaying discrepancies in mobile augmented reality applications for AEC/FM
KR102097416B1 (en) An augmented reality representation method for managing underground pipeline data with vertical drop and the recording medium thereof
WO2019164830A1 (en) Apparatus, systems, and methods for tagging building features in a 3d space
CN108171801A (en) A kind of method, apparatus and terminal device for realizing augmented reality
JP2022507502A (en) Augmented Reality (AR) Imprint Method and System
JP7001711B2 (en) A position information system that uses images taken by a camera, and an information device with a camera that uses it.
CN111127661B (en) Data processing method and device and electronic equipment
KR20190047922A (en) System for sharing information using mixed reality
US11756267B2 (en) Method and apparatus for generating guidance among viewpoints in a scene
CN114089836B (en) Labeling method, terminal, server and storage medium
CN113961066B (en) Visual angle switching method and device, electronic equipment and readable medium
CN115129213A (en) Data processing method and device, electronic equipment and storage medium
CN113452842B (en) Flight AR display method, system, computer equipment and storage medium
CN112825198B (en) Mobile tag display method, device, terminal equipment and readable storage medium
US20200242797A1 (en) Augmented reality location and display using a user-aligned fiducial marker
CN114708407A (en) Virtual three-dimensional space information display method, device and program product
CN111857341B (en) Display control method and device
CN115965742A (en) Space display method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant