CN112068752B - Space display method and device, electronic equipment and storage medium - Google Patents


Info

Publication number: CN112068752B
Authority: CN (China)
Application number: CN202010889091.4A
Other versions: CN112068752A (in Chinese)
Inventor: 王晓彤
Current Assignee: Beijing 58 Information Technology Co Ltd
Original Assignee: Beijing 58 Information Technology Co Ltd
Legal status: Active
Prior art keywords: display page, text, touch, display, virtual
Application filed by Beijing 58 Information Technology Co Ltd; priority to CN202010889091.4A; published as CN112068752A; granted and published as CN112068752B.

Classifications

    • G06F 3/0483: Interaction with page-structured environments, e.g. book metaphor
    • G06F 3/0488: Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06Q 30/0643: Electronic shopping; graphical representation of items or shoppers
    • G06Q 50/16: Real estate

Abstract

The invention provides a space display method and device, electronic equipment and a storage medium. The method comprises the following steps: displaying a display page of a virtual three-dimensional space of a target house on a touch-sensitive display screen, wherein the display page comprises an interactive control panel, and the interactive control panel at least comprises a first interactive control used for triggering the addition of a text label; in response to detecting a trigger operation on the first interactive control in the display page on the touch-sensitive display screen, acquiring a detected text input operation; and generating a text label according to text data corresponding to the text input operation, recording the mapping relation between the target object marked by the text input operation and the text label, and displaying a thumbnail icon of the text label in the display page of the virtual three-dimensional space. The user can therefore interact at any time while browsing and roaming the space, and is assisted in recording information about the space in a timely manner.

Description

Space display method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the technical field of three-dimensional space, and in particular, to a space display method and apparatus, an electronic device, and a storage medium.
Background
In the existing house-renting and house-viewing process, a client can browse a digital interface of the 3D space of a housing listing on the internet (such as VR, AR and panoramic interactive interfaces). However, such a 3D digital interface simply displays the information in the live-action space and lacks an operation mode for recording and marking that information in real time. While browsing and roaming the live-action space, the user cannot directly record information about a target object in it, such as marking details or noting problems.
Disclosure of Invention
The embodiments of the present invention provide a space display method and device, electronic equipment and a storage medium, aiming to solve the problem that existing live-action space displays cannot meet users' real-time recording needs.
In order to solve the technical problem, the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides a space display method, including:
displaying a display page of a virtual three-dimensional space of a target house on a touch-sensitive display screen, wherein the display page of the virtual three-dimensional space of the target house comprises an interactive control panel, and the interactive control panel at least comprises a first interactive control used for triggering addition of a text label;
in response to detecting a trigger operation on the first interaction control in a display page of a virtual three-dimensional space of a target house on the touch-sensitive display screen, acquiring a detected text input operation;
generating a text label according to text data corresponding to the text input operation, recording a mapping relation between a target object marked by the text input operation and the text label, and displaying a thumbnail icon of the text label in a display page of the virtual three-dimensional space;
the target object comprises at least one of a picture area in the display page, a space object to which the picture area in the display page belongs and a marker object in the display page, wherein the space object is an area of any space in the target house in the virtual three-dimensional space, and the marker object is a simulation object of any item in the target house in the virtual three-dimensional space.
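As a non-authoritative illustration of the first-aspect flow above, the following TypeScript sketch models generating a text label from entered text and recording the mapping relation between the marked target object and the label. All names (`TargetObject`, `LabelStore`, `addTextLabel`) are assumptions for illustration, not identifiers from the patent.

```typescript
// Hypothetical sketch of the first-aspect flow: generate a text label from
// the text-input operation's data and record the target-object -> label
// mapping. The three target kinds mirror the claim above.
type TargetKind = "pictureArea" | "spaceObject" | "markerObject";

interface TargetObject {
  kind: TargetKind;
  id: string; // e.g. a room id or a furniture-model id
}

interface TextLabel {
  labelId: string;
  text: string;
  thumbnailIcon: string; // thumbnail shown in the presentation page
}

class LabelStore {
  private mappings = new Map<string, TextLabel>();
  private nextId = 1;

  // Generate a label and record the mapping between the marked
  // target object and the label.
  addTextLabel(target: TargetObject, text: string): TextLabel {
    const label: TextLabel = {
      labelId: `label-${this.nextId++}`,
      text,
      thumbnailIcon: "text-note-icon",
    };
    this.mappings.set(`${target.kind}:${target.id}`, label);
    return label;
  }

  labelFor(target: TargetObject): TextLabel | undefined {
    return this.mappings.get(`${target.kind}:${target.id}`);
  }
}
```

The key design point sketched here is that the label is stored against the target object's identity, so the thumbnail icon can later be re-resolved to the object it marks.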
Optionally, the step of generating a text label according to text data input by the text input operation, recording a mapping relationship between a target object marked by the text input operation and the text label, and displaying a thumbnail icon of the text label in a display page of the virtual three-dimensional space includes:
acquiring a text input operation aiming at the display page, and displaying a text display page in the display page in a text input process, wherein the text display page is used for displaying a text corresponding to the text input operation;
in response to detecting a saving instruction for the text display page on the touch-sensitive display screen, generating a text label for the text, and recording a mapping relation between a target object marked by the text input operation and the text label;
and canceling the display of the text display page in the display page, and displaying the thumbnail icon of the text label in the display page of the virtual three-dimensional space.
Optionally, the method further comprises:
synchronizing the text label to a third party display page on a third party touch sensitive display screen which synchronously displays the virtual three-dimensional space of the target house with the touch sensitive display screen, and synchronously displaying the thumbnail icon of the text label in the third party display page;
the third party display page is a user interface for synchronously displaying the target house with the display page on the touch-sensitive display screen, and the third party touch-sensitive display screen and the touch-sensitive display screen are different touch-sensitive display screens.
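The third-party synchronization above can be pictured as a publish/subscribe channel between the two touch-sensitive display screens. The sketch below uses an in-memory callback list as the transport; a real implementation would push the events over a network (e.g. WebSocket messages), and all names here are illustrative assumptions.

```typescript
// Hedged sketch: the primary screen publishes a label event after a
// text label is created; every third-party page showing the same
// virtual 3D space re-renders the thumbnail icon on receipt.
interface LabelEvent {
  labelId: string;
  thumbnailIcon: string;
  targetId: string; // the marked target object in the shared space
}

type SyncListener = (ev: LabelEvent) => void;

class SyncChannel {
  private listeners: SyncListener[] = [];

  // A third-party display page subscribes to label updates.
  subscribe(listener: SyncListener): void {
    this.listeners.push(listener);
  }

  // Called on the primary touch-sensitive display screen.
  publish(ev: LabelEvent): void {
    for (const l of this.listeners) l(ev);
  }
}
```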
Optionally, the interactive control panel further includes a second interactive control for triggering a sliding marker, and the method further includes:
in response to detecting a trigger operation on the second interaction control in the display page of the virtual three-dimensional space of the target house on the touch-sensitive display screen, acquiring a sliding operation detected on the touch-sensitive display screen;
and displaying a mark trace corresponding to the sliding track in the display page according to the sliding track of the sliding operation, and recording a mapping relation between a target object marked by the sliding operation and the mark trace.
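The sliding-mark step can be sketched as sampling the slide operation into a track of screen points, rendering that track as a mark trace, and recording the target-object-to-trace mapping. The names below (`SlideMarker`, `markTarget`) are illustrative, not from the patent.

```typescript
// Illustrative sketch: a sliding operation on the touch screen becomes
// a mark trace mapped to the target object it covers.
interface Point { x: number; y: number; }

interface MarkTrace {
  traceId: string;
  points: Point[]; // the sampled sliding track
}

class SlideMarker {
  private traces = new Map<string, MarkTrace>(); // targetId -> trace
  private nextId = 1;

  // Record the slide track and the mapping target -> trace.
  markTarget(targetId: string, track: Point[]): MarkTrace {
    const trace = { traceId: `trace-${this.nextId++}`, points: [...track] };
    this.traces.set(targetId, trace);
    return trace;
  }

  traceFor(targetId: string): MarkTrace | undefined {
    return this.traces.get(targetId);
  }
}
```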
Optionally, the method further comprises:
and synchronously displaying the mark trace in a third party display page on a third party touch sensitive display screen synchronously displaying the virtual three-dimensional space of the target house with the touch sensitive display screen according to the mapping relation between the target object marked by the sliding operation and the mark trace.
Optionally, the interactive control panel further includes a third interactive control for triggering real-time position indication, and the method further includes:
in response to detecting a triggering operation on the third interactive control in the display page of the virtual three-dimensional space of the target house on the touch-sensitive display screen, acquiring a real-time position indication operation detected on the touch-sensitive display screen;
and displaying a real-time position mark corresponding to the real-time position indication operation in the display page according to the real-time position indication operation, and recording the position information of the real-time position mark.
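A minimal sketch of the real-time position indication, under assumed names: each indication operation yields a marker whose position information is recorded so it can later be displayed and synchronized.

```typescript
// Assumed-name sketch: record each real-time position indication and
// expose the latest mark for display in the presentation page.
interface PositionMark {
  x: number;
  y: number;
  timestamp: number; // when the indication was detected
}

class PositionIndicator {
  private history: PositionMark[] = [];

  // Called for each detected real-time position indication operation.
  indicate(x: number, y: number, timestamp: number): PositionMark {
    const mark = { x, y, timestamp };
    this.history.push(mark); // record the position information
    return mark;
  }

  latest(): PositionMark | undefined {
    return this.history[this.history.length - 1];
  }
}
```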
Optionally, the method further comprises:
and synchronously displaying the real-time position mark in a third party display page on a third party touch sensitive display screen synchronously displaying the virtual three-dimensional space of the target house with the touch sensitive display screen according to the position information of the real-time position mark.
Optionally, the step of displaying a presentation page of the virtual three-dimensional space of the target house on the touch-sensitive display screen includes:
displaying a display page of a virtual three-dimensional space of a target house on a touch-sensitive display screen, wherein the display page of the virtual three-dimensional space of the target house comprises a floating window control used for triggering display of the interactive control panel;
and responding to a triggering instruction aiming at the floating window control, and displaying the interactive control panel in the display interface.
Optionally, the method further comprises:
responding to a voice control instruction aiming at the display page, and collecting interactive voice of a user;
and controlling the display content in the display page according to the acquired interactive voice, and displaying the feedback information of the interactive voice through the display page.
Optionally, the feedback information includes at least one of feedback voice and feedback text;
the step of controlling the display content in the display page according to the collected interactive voice and displaying the feedback information of the interactive voice through the display page comprises the following steps:
carrying out voice recognition on the collected interactive voice, and displaying a space picture corresponding to a voice recognition result in the display page;
displaying a feedback text corresponding to the voice recognition result in a display page displaying the space picture;
and generating feedback voice corresponding to the feedback text according to the feedback text and playing the feedback voice.
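The three voice-interaction steps above can be sketched as one pipeline: recognize the collected speech, switch the displayed space picture to match the recognition result, show feedback text, and generate feedback speech from that text. The recognizer and text-to-speech stage are stubbed with placeholders here; a real system would call actual speech services, and every name in this sketch is an assumption.

```typescript
// Hedged sketch of the voice-interaction pipeline described above.
interface VoiceFeedback {
  spacePicture: string;   // which space picture the page switches to
  feedbackText: string;   // feedback text shown on the presentation page
  feedbackSpeech: string; // feedback speech generated from the text
}

// Placeholder recognizer (assumption): maps an utterance to a room name.
function recognize(utterance: string): string {
  return utterance.toLowerCase().includes("kitchen") ? "kitchen" : "living-room";
}

function handleInteractiveVoice(utterance: string): VoiceFeedback {
  const room = recognize(utterance);
  const feedbackText = `Now showing the ${room}.`;
  return {
    spacePicture: `${room}-panorama`,
    feedbackText,
    // Feedback speech is generated from the feedback text (stubbed as an
    // identity transform; a real system would call a TTS engine here).
    feedbackSpeech: feedbackText,
  };
}
```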
In a second aspect, an embodiment of the present invention provides a space display apparatus, including:
the space display module is used for displaying a display page of a virtual three-dimensional space of a target house on a touch-sensitive display screen, wherein the display page of the virtual three-dimensional space of the target house comprises an interactive control panel, and the interactive control panel at least comprises a first interactive control used for triggering addition of a text label;
the first interactive operation module is used for acquiring a detected text input operation in response to detecting a trigger operation on the first interactive control in the display page of the virtual three-dimensional space of the target house on the touch-sensitive display screen;
the text label generating module is used for generating a text label according to text data corresponding to the text input operation, recording the mapping relation between a target object marked by the text input operation and the text label, and displaying a thumbnail icon of the text label in a display page of the virtual three-dimensional space;
the target object comprises at least one of a picture area in the display page, a space object to which the picture area in the display page belongs and a marker object in the display page, wherein the space object is an area of any space in the target house in the virtual three-dimensional space, and the marker object is a simulation object of any item in the target house in the virtual three-dimensional space.
Optionally, the text label generating module includes:
the text display sub-module is used for acquiring a text input operation aiming at the display page and displaying a text display page in the text input process, wherein the text display page is used for displaying a text corresponding to the text input operation;
a text label generation sub-module, configured to generate a text label for the text in response to detecting a save instruction for the text presentation page on the touch-sensitive display screen, and record a mapping relationship between a target object marked by the text input operation and the text label;
and the thumbnail icon display sub-module is used for canceling the display of the text display page in the display page and displaying the thumbnail icon of the text label in the display page of the virtual three-dimensional space.
Optionally, the apparatus further comprises:
the first information synchronization module is used for synchronizing the text label to a third party display page on a third party touch-sensitive display screen which synchronously displays the virtual three-dimensional space of the target house with the touch-sensitive display screen, and synchronously displaying the thumbnail icon of the text label in the third party display page;
the third party display page is a user interface for synchronously displaying the target house with the display page on the touch-sensitive display screen, and the third party touch-sensitive display screen and the touch-sensitive display screen are different touch-sensitive display screens.
Optionally, the interactive control panel further includes a second interactive control for triggering a sliding marker, and the apparatus further includes:
the second interactive operation module is used for responding to the trigger operation of the second interactive control in the display page of the virtual three-dimensional space of the target house, detected on the touch-sensitive display screen, and acquiring the sliding operation detected on the touch-sensitive display screen;
and the sliding marking module is used for displaying a marking trace corresponding to the sliding track in the display page according to the sliding track of the sliding operation and recording a mapping relation between a target object marked by the sliding operation and the marking trace.
Optionally, the apparatus further comprises:
and the second information synchronization module is used for synchronously displaying the mark trace in a third party display page on a third party touch sensitive display screen synchronously displaying the virtual three-dimensional space of the target house with the touch sensitive display screen according to the mapping relation between the target object marked by the sliding operation and the mark trace.
Optionally, the interactive control panel further includes a third interactive control for triggering real-time position indication, and the apparatus further includes:
the third interactive operation module is used for responding to the trigger operation of the third interactive control in the display page of the virtual three-dimensional space of the target house, detected on the touch-sensitive display screen, and acquiring the real-time position indication operation detected on the touch-sensitive display screen;
and the real-time position indicating module is used for displaying a real-time position mark corresponding to the real-time position indicating operation in the display page according to the real-time position indicating operation and recording the position information of the real-time position mark.
Optionally, the apparatus further comprises:
and the third information synchronization module is used for synchronously displaying the real-time position mark in a third party display page on a third party touch sensitive display screen which synchronously displays the virtual three-dimensional space of the target house with the touch sensitive display screen according to the position information of the real-time position mark.
Optionally, the space display module includes:
the space display sub-module is used for displaying a display page of a virtual three-dimensional space of a target house on a touch-sensitive display screen, and the display page of the virtual three-dimensional space of the target house comprises a floating window control used for triggering and displaying the interactive control panel;
and the interactive control panel display module is used for responding to a trigger instruction aiming at the floating window control and displaying the interactive control panel in the display interface.
Optionally, the apparatus further comprises:
the interactive voice acquisition module is used for responding to a voice control instruction aiming at the display page and acquiring the interactive voice of the user;
and the fourth interactive operation module is used for controlling the display content in the display page according to the collected interactive voice and displaying the feedback information of the interactive voice through the display page.
Optionally, the feedback information includes at least one of feedback voice and feedback text;
the fourth interoperation module is specifically configured to:
carrying out voice recognition on the collected interactive voice, and displaying a space picture corresponding to a voice recognition result in the display page;
displaying a feedback text corresponding to the voice recognition result in a display page displaying the space picture;
and generating feedback voice corresponding to the feedback text according to the feedback text and playing the feedback voice.
In a third aspect, an embodiment of the present invention additionally provides an electronic device, including: memory, a processor and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the space exhibition method according to the first aspect.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the space displaying method according to the first aspect are implemented.
In the embodiment of the invention, a display page of a virtual three-dimensional space of a target house is displayed on a touch-sensitive display screen, wherein the display page comprises an interactive control panel, and the interactive control panel at least comprises a first interactive control used for triggering the addition of a text label; in response to detecting a trigger operation on the first interactive control in the display page on the touch-sensitive display screen, a detected text input operation is acquired; a text label is generated according to text data corresponding to the text input operation, the mapping relation between the target object marked by the text input operation and the text label is recorded, and a thumbnail icon of the text label is displayed in the display page of the virtual three-dimensional space. Therefore, the user can interact with the space and the objects in it at any time while browsing and roaming, the user is assisted in browsing the space and recording information in a timely manner, and the clarity of information indication in the user's real-time interaction while roaming the space is also improved.
The foregoing description is only an overview of the technical solutions of the present invention, and the embodiments of the present invention are described below in order to make the technical means of the present invention more clearly understood and to make the above and other objects, features, and advantages of the present invention more clearly understandable.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings used in the description of the embodiments are briefly introduced below. The drawings in the following description are only some embodiments of the present invention; those skilled in the art can obtain other drawings based on them without inventive effort.
FIG. 1 is a flow chart of steps of a spatial display method according to an embodiment of the present invention;
FIG. 2 is a flow chart of steps of another spatial display method in an embodiment of the present invention;
FIG. 3A is a diagram of a presentation page during a process of adding a text label according to an embodiment of the present invention;
fig. 3B is a schematic diagram of displaying a mark trace corresponding to a sliding track in a display page according to an embodiment of the present invention;
FIG. 3C is a schematic diagram illustrating a real-time indication of a location in a presentation page in an embodiment of the invention;
FIG. 3D is a diagram of a presentation page in an embodiment of the invention;
FIG. 3E is a diagram illustrating an exemplary interactive control panel being displayed by long pressing the display page;
FIG. 3F is a diagram illustrating that a floating window control in a presentation page is directly clicked to trigger presentation of an interactive control panel according to an embodiment of the present invention;
FIG. 3G is a schematic diagram of directly presenting an interactive control panel containing interactive controls in a presentation page according to an embodiment of the present invention;
FIG. 3H is a diagram illustrating a speech control displayed in a display page according to an embodiment of the present invention;
FIG. 3I is a diagram illustrating a voice dialog menu displayed in a display page according to an embodiment of the present invention;
FIG. 3J is a diagram illustrating a process of voice interaction in a presentation page in an embodiment of the present invention;
FIG. 3K is a diagram illustrating another process of voice interaction in a presentation page in an embodiment of the invention;
FIG. 3L is a diagram illustrating a feedback control of progress in a presentation page according to an embodiment of the present invention;
FIG. 3M is a schematic diagram of a voice interaction process in an embodiment of the invention;
FIG. 4 is a schematic structural diagram of a space displaying apparatus according to an embodiment of the present invention;
FIG. 5 is a schematic structural view of another space display apparatus in an embodiment of the present invention;
fig. 6 is a schematic diagram of a hardware structure of an electronic device in the embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, a flowchart illustrating steps of a space displaying method according to an embodiment of the present invention is shown.
Step 110, displaying a display page of a virtual three-dimensional space of a target house on a touch-sensitive display screen, wherein the display page of the virtual three-dimensional space of the target house comprises an interactive control panel, and the interactive control panel at least comprises a first interactive control used for triggering addition of a text label;
step 120, in response to detecting a trigger operation on the first interaction control in the display page of the virtual three-dimensional space of the target house on the touch-sensitive display screen, acquiring a detected text input operation;
step 130, generating a text label according to text data corresponding to the text input operation, recording a mapping relation between a target object marked by the text input operation and the text label, and displaying a thumbnail icon of the text label in a display page of the virtual three-dimensional space; the target object comprises at least one of a picture area in the display page, a space object to which the picture area in the display page belongs and a marker object in the display page, wherein the space object is an area of any space in the target house in the virtual three-dimensional space, and the marker object is a simulation object of any item in the target house in the virtual three-dimensional space.
In the embodiment of the invention, the user is enabled to perform information-recording interactions, such as indicating, selecting and text-marking, on the space and the objects in it at any time while roaming the 3D space of the digital interface. This assists the user in recording in the VR live-action space in a timely manner, and can solve the problem of ambiguous information indication during the user's real-time interaction in the VR house-viewing process.
Specifically, while a display page of a target house is displayed, the interactive operation of a user is collected to control the display content in the display page and record the interactive information of the interactive operation.
Furthermore, in the embodiment of the present invention, the display page of the target house may be displayed in any available electronic device, and the display page of the target house may be triggered in any available manner, which is not limited in the embodiment of the present invention.
For example, after receiving a space exhibition request for a target house, the spatial data of the target house may be obtained according to an object identifier carried by the space exhibition request, and the spatial data may include, but is not limited to, at least one of visual dimension data (e.g., exhibition data of a virtual three-dimensional space, etc.) and auditory dimension data (e.g., environmental audio data of the periphery of the target house, etc.) of the target house. Of course, in the embodiment of the present invention, the spatial data may also be configured to include haptic dimension data (e.g., haptic data of different objects in the target house) according to requirements, and the embodiment of the present invention is not limited thereto.
The object identifier may be any kind of identification information that can represent a target house, for example, the object identifier may be set to be a house name, a house address, and the like. The visual dimension data may be any data that is acquired for the target house and is visually observable, and may include, for example and without limitation, a picture of the target house, a video, a virtual object generated based on the target house (e.g., a three-dimensional spatial model of the house), a virtual animation, and so forth. The auditory dimension data may be any data related to the audio data collected for the target premises, and may include, for example and without limitation, the collected audio data files, the time of collection of the audio data in each audio data file, the location of collection, the volume of the audio data, and so forth. The target house may also be any other object that can be spatially displayed, and the target house may be replaced by any other spatial structure according to a specific application scenario, which is not limited in this embodiment of the present invention.
After the spatial data of the target house is obtained, a display page of a virtual three-dimensional space of the target house may be displayed on the touch-sensitive display screen, and the auditory dimension data may be presented based on the display page. The display page may include, but is not limited to, at least one of a VR (Virtual Reality) display interface, an AR (Augmented Reality) display interface, and a panoramic display interface.
In the embodiment of the present invention, the display page of the target house may be generated in any available manner, which is not limited herein. To improve the sense of reality and immersion when the user roams in the 3D space, the display page may be set as a digital interface of the 3D space. On the basis of the display page, the auditory dimension data can be combined with it, and the auditory dimension data of the target house can be played based on the display page; for example, the audio data in the auditory dimension data may be played, the volume of each piece of audio data may be indicated, and so on.
For example, when the visual dimension data and the auditory dimension data are combined, the sound insulation effect of a room, the surrounding noise conditions, and the like can be perceived while viewing the room, enhancing the user's comprehensive perception of house information when viewing the house on the 3D interface.
Specifically, in the embodiment of the present invention, in order to facilitate a user to browse a virtual three-dimensional space and record information in time, at least one first interaction control for triggering addition of a text label may be included in the interaction control panel.
In the embodiment of the present invention, the first interactive control in the interactive control panel may also be triggered in any available manner, which is not limited herein. For example, the interactive control panel may be displayed in the display page by long-pressing the touch-sensitive display screen where the display page is located, or by directly clicking a floating window control preset in the display page for triggering display of the interactive control panel. The interactive control panel may also be displayed directly in the display page at the same time as the display page of the virtual three-dimensional space of the target house is displayed on the touch-sensitive display screen, without requiring a separate trigger step. The first interactive control in the interactive control panel can then be triggered in any available manner, such as by clicking it. The appearance, display position, and the like of the interactive control panel and the first interactive control can be customized as required, and the embodiment of the present invention is not limited thereto.
After the interactive operation input by the user is acquired, the display content in the display page can be further controlled according to the acquired interactive operation, and the interactive information of the interactive operation is recorded. The interactive information may include any information related to the interactive operation, such as the type of the interactive operation, the time of the interactive operation, an operation trace corresponding to the interactive operation, data generated by the interactive operation, and so on. Moreover, for different interactive operations, the corresponding interactive information may be different, and may be specifically set by user according to requirements, which is not limited in the embodiment of the present invention.
For example, if the interactive operation switches the display content in the display page, the switched content may be displayed in the display page, and the display content before and after the switch, the operation time, and any other required interactive information may be recorded; if the interactive operation marks a partial picture and/or partial items in the display page, the mark trace generated by the current interactive operation may be displayed in the display page, and interactive information such as the position of the mark, the marked pictures and/or items, and the operation time may be recorded; and so on.
If a triggering operation on the first interaction control in the display page of the virtual three-dimensional space of the target house is detected on the touch-sensitive display screen, the detected text input operation can be acquired; a text label is generated according to the text data corresponding to the text input operation, a mapping relationship between the target object marked by the text input operation and the text label is recorded, and a thumbnail icon of the text label is displayed in the display page of the virtual three-dimensional space. The target object comprises at least one of a picture area in the display page, a space object to which the picture area in the display page belongs, and a marker object in the display page, where the space object is the area of any space of the target house in the virtual three-dimensional space, and the marker object is a simulated object, in the virtual three-dimensional space, of any item in the target house.
The text label may be generated in any available manner, and the embodiment of the present invention is not limited thereto. Moreover, the appearance of the thumbnail icon of the text label can be customized as required. The target object marked by the text input operation can also be selected by the user in the display page as needed. For example, upon triggering the first interactive control, a list of selectable markable objects may be presented in a pop-up window or the like, from which the user may select the marked target object before entering text, and so on.
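One way the text-label step could be sketched is below: a label is generated from the entered text, and a mapping between the marked target object and the label is recorded. The names (`LabelStore`, `addLabel`, the target kinds) are illustrative assumptions, not the patent's own API:

```typescript
// Illustrative sketch: generate a text label and record the mapping between
// the marked target object and the label.
type TargetKind = "pictureArea" | "spaceObject" | "markerObject";

interface TargetObject { kind: TargetKind; id: string }

interface TextLabel { id: number; text: string; createdAt: number }

class LabelStore {
  private nextId = 1;
  private labels: TextLabel[] = [];
  private mapping = new Map<number, TargetObject>(); // label id -> marked object

  addLabel(text: string, target: TargetObject): TextLabel {
    const label = { id: this.nextId++, text, createdAt: Date.now() };
    this.labels.push(label);
    this.mapping.set(label.id, target); // record the mapping relationship
    return label;
  }

  targetOf(labelId: number): TargetObject | undefined {
    return this.mapping.get(labelId);
  }

  all(): TextLabel[] { return [...this.labels]; }
}
```

The three `TargetKind` values correspond to the picture area, space object, and marker object enumerated in the paragraph above.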
In the embodiment of the invention, the user can thus perform information-recording interactive operations on the space at any time while browsing and roaming, which assists the user in recording information about the space in time and also improves the clarity of information indication during the user's real-time interaction in the course of space roaming.
Referring to fig. 2, in an embodiment of the present invention, the step 130 may further include:
step 131, acquiring a text input operation for the display page, and displaying a text display page in the display page in a text input process, wherein the text display page is used for displaying a text corresponding to the text input operation;
step 132, in response to detecting a saving instruction for the text display page on the touch-sensitive display screen, generating a text label for the text, and recording a mapping relationship between a target object marked by the text input operation and the text label;
and step 133, canceling the display of the text display page in the display page, and displaying the thumbnail icon of the text label in the display page of the virtual three-dimensional space.
In order to facilitate the user's accurately knowing and checking the text data entered during information recording, a text display page may be displayed in the display page during text input, where the text display page is used to display the text corresponding to the text input operation. As shown in fig. 3A (1), the middle of the display page is a blank text display page, and fig. 3A (2) shows a text display page containing text data. Of course, the appearance, display position, and the like of the text display page can be customized as required, and the embodiment of the present invention is not limited thereto.
If a saving instruction for the text display page is detected on the touch-sensitive display screen, a text label for the text can be generated, and the mapping relation between the target object marked by the text input operation and the text label is recorded; meanwhile, the display of a text display page in the display page can be cancelled, and the thumbnail icon of the text label is displayed in the display page of the virtual three-dimensional space.
The saving instruction for the text display page may be input in any available manner, and the embodiment of the present invention is not limited thereto. For example, a control for triggering a save instruction may be set in the text display page shown in fig. 3A (1) and (2), such as the "save" control shown there; the user may trigger a save instruction for the text data by clicking this control, thereby generating a text label. A control for cancelling the text may also be set in the text display page, such as the "do not save for now" control in fig. 3A (1) and (2); the user may trigger a cancel instruction by clicking this control, thereby discarding the text data and dismissing the text display page.
Moreover, in order to prevent the text display page from obstructing the user's view while still prompting the user to view the text labels, the display of the text display page can be cancelled at the same time as the text label is generated, and the thumbnail icon of the text label displayed in the display page of the virtual three-dimensional space; as indicated by the arrow in fig. 3A (3), the icon shown is the thumbnail icon of the text label.
The format of the text label, the appearance of its thumbnail icon, and the like can be customized as required, and the embodiment of the present invention is not limited thereto. Moreover, in practical applications, a user may create multiple text labels through text input operations. To facilitate the user's subsequently looking up different text labels, the text labels may be saved locally, and each text label, together with the target object with which it has a mapping relationship, may be recorded in any available form such as a list. The list may default to a hidden state, with an icon or the like generated in the display page for triggering its presentation; the list of text labels may be shown by clicking the corresponding icon, or by a sliding operation upwards from the bottom of the touch-sensitive display screen, as shown in fig. 3A (4), and correspondingly may be hidden again by a sliding operation dragging the list downwards, and so on.
Referring to fig. 2, in an embodiment of the present invention, the method further includes:
step 140, synchronizing the text label to a third party display page on a third party touch sensitive display screen which synchronously displays the virtual three-dimensional space of the target house with the touch sensitive display screen, and synchronously displaying the thumbnail icon of the text label in the third party display page; the third party display page is a user interface for synchronously displaying the target house with the display page on the touch-sensitive display screen, and the third party touch-sensitive display screen and the touch-sensitive display screen are different touch-sensitive display screens.
In practical applications, the virtual three-dimensional space of the target house may be displayed through a plurality of touch-sensitive display screens at the same time, and there may be multiple simultaneous users of the displayed virtual three-dimensional space who are associated with one another and need to interact with one another. For example, during a house showing, an agent and a house viewer may each display the virtual three-dimensional space of the house at different locations through different electronic devices. To facilitate information exchange between them, the display content in the display pages of the virtual three-dimensional space of the target house on the touch-sensitive display screens of both parties' electronic devices can be synchronized in real time, so that each party can see the interactive operations the other performs in the display page.
Accordingly, the text label may be synchronized to a third party presentation page on a third party touch sensitive display screen that displays the virtual three dimensional space of the target house in synchronization with the touch sensitive display screen, and the thumbnail icon of the text label may be synchronously presented in the third party presentation page; the third party display page is a user interface for synchronously displaying the target house with the display page on the touch-sensitive display screen, and the third party touch-sensitive display screen and the touch-sensitive display screen are different touch-sensitive display screens.
In addition, in practical applications, there may be multiple electronic devices simultaneously showing the same target house, but only some of their users may wish to share information, such as the agent and house viewer described above, while other users may not wish to share information, such as between unrelated house viewers, and so on.
Therefore, in the embodiment of the present invention, when performing information-sharing synchronization, it may be required that display pages be synchronized only when, in addition to showing the same target house, a certain condition is satisfied between them: for example, the accounts of the display pages are related to each other, or are the same account, or the display pages are connected to the same server, or the users of the electronic devices whose touch-sensitive display screens show the display pages are in a real-time call, and the like. The specific conditions to be met can be customized as required, and the embodiment of the present invention is not limited thereto.
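The synchronization gate just described can be sketched as a predicate over two sessions: pages synchronize only when at least one of the listed conditions holds. The `Session` fields and `canSync` are illustrative assumptions:

```typescript
// Sketch of the synchronization conditions: two display pages showing the
// same target house synchronize only if at least one condition is met.
interface Session {
  account: string;
  relatedAccounts: string[]; // accounts associated with this account
  serverId: string;          // server this display page is connected to
  inCallWith: string[];      // accounts in a real-time call with this user
}

function canSync(a: Session, b: Session): boolean {
  return (
    a.account === b.account ||                // accounts are the same account
    a.relatedAccounts.includes(b.account) ||  // accounts are related
    b.relatedAccounts.includes(a.account) ||
    a.serverId === b.serverId ||              // connected to the same server
    a.inCallWith.includes(b.account)          // users are in a real-time call
  );
}
```

Because the conditions are OR-ed, adding a further custom condition only requires appending one more clause, matching the "customized as required" note above.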
Referring to fig. 2, in the embodiment of the present invention, the interactive control panel further includes a second interactive control for triggering a slide mark, and the method further includes:
step 150, in response to detecting the trigger operation of the second interactive control in the display page of the virtual three-dimensional space of the target house on the touch-sensitive display screen, acquiring the sliding operation detected on the touch-sensitive display screen;
step 160, displaying a mark trace corresponding to the sliding track in the display page according to the sliding track of the sliding operation, and recording a mapping relation between a target object marked by the sliding operation and the mark trace.
In practical applications, when the virtual three-dimensional space of the target house is displayed, a user may need to mark some target objects in the display page in real time and display the mark traces in the display page. Therefore, in the embodiment of the present invention, in order to facilitate slide marking and to distinguish it from the first interactive control, a second interactive control for triggering a slide mark may be provided in the interactive control panel. The appearance of the second interactive control can be customized as required, and the embodiment of the present invention is not limited thereto.
If a triggering operation on the second interaction control in the display page of the virtual three-dimensional space of the target house is detected on the touch-sensitive display screen, the sliding operation detected on the touch-sensitive display screen is acquired; then, according to the sliding track of the sliding operation, a mark trace corresponding to the sliding track is displayed in the display page, and a mapping relationship between the target object marked by the sliding operation and the mark trace is recorded. The appearance of the mark trace can be customized as required; for example, the mark trace may be set to follow the sliding track, displayed in the display page as a curve of a specified color, and so on. Fig. 3B is a schematic diagram of displaying, in the display page, a mark trace corresponding to the collected sliding track, where the mark trace takes the form of circling. The user can trigger the interactive mode for slide marking by clicking the "brush"-shaped interactive control, i.e., the second interactive control, in the interactive control panel.
After the sliding operation is acquired, the marked target object can be identified from the sliding track, and the mapping relationship between the currently marked target object and the mark trace can be recorded, such as the picture name, the item name, the position information of the picture in the target house, the position information of the item in the target house, and any other information related to the currently marked picture, item, or other target object. The content to be recorded can be set by the user as required, which is not limited in the embodiment of the present invention.
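One simple way to identify the marked target object from a circling-style track is a 2D bounding-box hit test: the marked item is taken to be the one whose center falls inside the bounding box of the slide track. This is an assumption for demonstration (a real implementation could instead ray-cast into the 3D scene); `markedItem` and `SceneItem` are hypothetical names:

```typescript
// Illustrative hit test for the circling-style slide mark.
interface Point { x: number; y: number }
interface SceneItem { name: string; center: Point }

function markedItem(track: Point[], items: SceneItem[]): SceneItem | undefined {
  const xs = track.map(p => p.x);
  const ys = track.map(p => p.y);
  const minX = Math.min(...xs), maxX = Math.max(...xs);
  const minY = Math.min(...ys), maxY = Math.max(...ys);
  // The marked object is the first item whose center lies inside the
  // bounding box of the sliding track.
  return items.find(it =>
    it.center.x >= minX && it.center.x <= maxX &&
    it.center.y >= minY && it.center.y <= maxY);
}
```

Once an item is found, its name and position can be stored alongside the trace as the recorded mapping relationship.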
Referring to fig. 2, in an embodiment of the present invention, the method further includes:
step 170, synchronously displaying the mark trace in a third party display page on a third party touch sensitive display screen synchronously displaying the virtual three-dimensional space of the target house with the touch sensitive display screen according to the mapping relation between the target object marked by the sliding operation and the mark trace.
After the mark trace and the mapping relationship between the target object marked by the sliding operation and the mark trace are obtained, in order that the third-party display page, which displays the target house synchronously with the display page on the touch-sensitive display screen, also synchronously displays the mark trace, the mark trace can be synchronously displayed in the third-party display page on the third-party touch-sensitive display screen according to that mapping relationship, so that the display content of the third-party display page and of the display page remains synchronized in real time.
Referring to fig. 2, in the embodiment of the present invention, the interactive control panel further includes a third interactive control for triggering a real-time position indication, and the method further includes:
step 180, in response to detecting a trigger operation on the third interactive control in the display page of the virtual three-dimensional space of the target house on the touch-sensitive display screen, acquiring a real-time position indication operation detected on the touch-sensitive display screen;
and 190, displaying a real-time position mark corresponding to the real-time position indication operation in the display page according to the real-time position indication operation, and recording position information of the real-time position mark.
In the embodiment of the present invention, in order to facilitate the user's indicating a real-time position in the display page, a third interactive control for triggering real-time position indication may further be provided in the interactive control panel. The appearance of the third interactive control can be customized as required, and the embodiment of the present invention is not limited thereto.
Then, if a triggering operation on the third interactive control in the display page of the virtual three-dimensional space of the target house is detected on the touch-sensitive display screen, the real-time position indication operation detected on the touch-sensitive display screen is acquired; a real-time position mark corresponding to the real-time position indication operation can then be displayed in the display page, and the position information of the real-time position mark recorded. The appearance of the real-time position mark can be customized as required; for example, it may be set as an icon of a specified color and shape, and so on. The position information of the real-time position mark may include any information related to its position; for example, if the display page currently shows a space object within the virtual three-dimensional space of the house, the position information may include the three-dimensional coordinate information of the real-time position mark within that virtual three-dimensional space, and so on. Fig. 3C is a schematic diagram of real-time position indication when the third interactive control is triggered; the real-time position can be moved from the position shown in fig. 3C (1) (where the circular spot is the real-time position mark) to the position shown in fig. 3C (2). For example, the user may trigger the interaction mode for real-time position indication by clicking the "laser pointer" interaction control, i.e., the third interaction control, in the interaction control panel.
The specific form of the real-time position indication operation may be set by user according to requirements, and the embodiment of the present invention is not limited.
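A minimal sketch of the real-time position mark described above, assuming its position information is the three-dimensional coordinate within the virtual space plus the history of updates; `PositionMarker` and its members are illustrative:

```typescript
// Sketch: a real-time position marker whose recorded position information is
// its 3D coordinate in the virtual space, updated as the indication moves
// (as from fig. 3C (1) to fig. 3C (2)).
type Vec3 = [number, number, number];

class PositionMarker {
  position: Vec3;
  private history: Vec3[] = [];

  constructor(initial: Vec3) {
    this.position = initial;
    this.history.push(initial);
  }

  moveTo(next: Vec3): void {
    this.position = next;
    this.history.push(next); // record each indicated position
  }

  positionInfo(): { current: Vec3; trail: Vec3[] } {
    return { current: this.position, trail: [...this.history] };
  }
}
```

The `positionInfo()` payload is also what would be shipped to a third-party display page for the synchronization described in step 1110.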
Referring to fig. 2, in an embodiment of the present invention, the method further includes:
step 1110, according to the position information of the real-time position mark, synchronously displaying the real-time position mark in a third-party display page on a third-party touch-sensitive display screen which synchronously displays the virtual three-dimensional space of the target house with the touch-sensitive display screen.
Accordingly, for information synchronization, each time a real-time position mark is generated, the real-time position mark may be synchronously displayed, according to its position information, in the third-party display page on the third-party touch-sensitive display screen that synchronously displays the virtual three-dimensional space of the target house with the touch-sensitive display screen.
Referring to fig. 2, in the embodiment of the present invention, step 110 may further include:
step 111, displaying a display page of a virtual three-dimensional space of a target house on a touch-sensitive display screen, wherein the display page of the virtual three-dimensional space of the target house comprises a floating window control used for triggering and displaying the interactive control panel;
and step 112, responding to a triggering instruction aiming at the floating window control, and displaying the interactive control panel in the display interface.
As described above, in the embodiment of the present invention, the interactive control panel may include at least one interactive control. In order to prevent the interactive control panel from occupying too much screen space and affecting the visual effect of the display content in the display page, the display page of the virtual three-dimensional space of the target house may be set to include a floating window control for triggering display of the interactive control panel, and the interactive control panel is displayed in the display interface only when a trigger instruction for the floating window control is received.
The user may trigger the trigger instruction for the floating window control in any available manner, which is not limited in the embodiment of the present invention. For example, the floating window control in the display page may be clicked directly to trigger the instruction, and so on. Alternatively, display of the interactive control panel may be triggered by long-pressing the display page, which is likewise not limited herein. In addition, when a triggering operation on any interactive control in the interactive control panel is received, the interactive control panel displayed in the display page can be restored to the initial floating window control; further, to help the user know which interactive control is currently triggered, the floating window control in the display page can be replaced by the currently triggered interactive control when a control is triggered, and the embodiment of the present invention is not limited thereto. Fig. 3D is a schematic diagram of a display page; fig. 3E is a schematic diagram of triggering display of the interactive control panel by long-pressing the display page; and fig. 3F is a schematic diagram of triggering an interaction start instruction by directly clicking the floating window control in the display page. The second icon at the upper right corner in figs. 3D to 3F is the floating window control for triggering display of the interactive control panel.
Of course, in practice, the interactive control panel containing each interactive control may also be displayed directly in the display page, in place of the floating window control for triggering its display. Fig. 3G is a schematic diagram in which an interactive control panel containing each interactive control is displayed directly in the display page; the three icons at the upper right corner of fig. 3G are three interactive controls, i.e., the interactive control panel includes three interactive controls, which may be the first interactive control, the second interactive control, and the third interactive control.
After an interactive control in the interactive control panel is triggered, the interactive operation corresponding to that control can be detected. The electronic device where the display page is located acquires only interactive operations input in the interactive operation mode corresponding to the currently triggered interactive control, and does not acquire operations in other interactive operation modes; this avoids confusion in the acquisition of interactive operations and effectively solves the problem of ambiguous information indication in the user's real-time interaction. For example, when the first interactive control is triggered, only detected text input operations may be acquired, while when the second interactive control is triggered, only detected sliding operations may be acquired, and so on.
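The mode gating just described can be sketched as a small dispatcher that accepts an operation only when its input mode matches the currently triggered control; the `Mode` names and `InteractionController` are assumptions for illustration:

```typescript
// Sketch: after a control is triggered, only operations of that control's
// corresponding input mode are acquired; other modes are ignored.
type Mode = "textInput" | "slideMark" | "positionIndicate";

interface Operation { mode: Mode; payload: string }

class InteractionController {
  private active: Mode | null = null;
  acquired: Operation[] = [];

  trigger(mode: Mode): void {
    this.active = mode; // user clicked the first/second/third interactive control
  }

  acquire(op: Operation): boolean {
    if (op.mode !== this.active) return false; // other modes are not acquired
    this.acquired.push(op);
    return true;
  }
}
```

Because `acquire` rejects mismatched modes, a stray slide while the text-label control is active simply returns `false` instead of producing a spurious mark.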
Referring to fig. 2, in the embodiment of the present invention, the method may further include:
step 1120, responding to a voice control instruction aiming at the display page, and collecting interactive voice of a user;
and step 1130, controlling the display content in the display page according to the acquired interactive voice, and displaying the feedback information of the interactive voice through the display page.
In addition, in the embodiment of the invention, in order to facilitate the user's controlling the display content in the display page, when the user roams in the virtual three-dimensional space, voice control instructions issued by the user can be recognized, the collected interactive voice identified, the display content in the display page controlled according to the different interactive voices, and the feedback information of the interactive voice displayed through the display page. This intelligently guides and answers the user, enhances the user's immersive interactive experience during space display, solves the problem of incomplete information acquisition during space previews such as house viewing, and at the same time improves convenience of control.
Specifically, the interactive voice of the user can be collected in response to a voice control instruction for the display page; and controlling the display content in the display page according to the acquired interactive voice, and displaying the feedback information of the interactive voice through the display page.
The voice control instruction for the display page may be triggered in any available manner, and the embodiment of the present invention is not limited thereto. For example, in the display page, the voice assistant may be triggered automatically, e.g., when the user has watched for a certain time (e.g., 5-10 s), that is, when the content in the display page remains unchanged for a certain time with no other operation, or has rotated continuously for a certain time, that is, when the content in the display page changes continuously for a certain time with no other operation; or it may be triggered actively, e.g., when the user clicks the voice control, thereby issuing a voice control instruction for the corresponding display page. Moreover, in the embodiment of the present invention, in order to prompt the user that the voice control instruction has been triggered, a voice dialog menu may be set to appear in the display page when voice control is triggered, and the user may ask a question through natural language or a click operation. Meanwhile, once the voice control instruction is triggered, the user's interactive voice can be collected, such as the interactive voice of a question asked through natural language or a click operation. The voice control may be a floating icon arranged in the display page, for example the circular icon at the lower right corner of fig. 3H. When the voice control is triggered, the voice dialog menu displayed in the display page may be as shown in fig. 3I, and the user may select any text in the voice dialog menu as the interactive voice for asking a question, or may ask through natural speech, in which case the user's natural-speech question is collected. To make it convenient for the user to confirm whether the collected interactive voice matches the intended interactive voice, the text corresponding to the interactive voice may also be displayed in the display page, as shown in fig. 3J.
After the interactive voice input by the user is collected, the display content in the display page can be controlled according to the collected interactive voice, and the feedback information of the interactive voice displayed through the display page. For example, according to the content of the question in the interactive voice, the voice assistant may output, through natural speech recognition, processing, and synthesis, feedback information containing an answer to the interactive voice, responding to the user's question; fig. 3K is a schematic diagram of the text fed back for the question shown in fig. 3J. The display content in the display page can also be controlled according to control instructions in the user's interactive voice; for example, where the target house is a residential space, functions such as room switching, function invocation, and intelligent guidance can be executed in the display page.
In addition, in the process of controlling the display content in the display page, the control progress can be fed back in forms such as voice and text. Fig. 3L is a schematic diagram of feeding back the control progress of the display content in the display page in text form; of course, the text representing the control progress in the figure may also be broadcast by voice at the same time.
Optionally, in an embodiment of the present invention, the feedback information includes at least one of feedback voice and feedback text;
the step 1130 may further include:
step S1, carrying out voice recognition on the collected interactive voice, and displaying a space picture corresponding to the voice recognition result in the display page;
step S2, displaying a feedback text corresponding to the voice recognition result in a display page displaying the space picture;
and step S3, generating feedback voice corresponding to the feedback text according to the feedback text and playing the feedback voice.
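Steps S1-S3 can be sketched as a single handler that chains recognition, display, and synthesis. This is a hypothetical illustration: the `FakeRecognizer`, `FakeScene`, and `FakeTTS` stand-ins and all names below are assumptions, not components named by the patent.

```python
class FakeRecognizer:
    def recognize(self, audio: str) -> str:
        return audio.strip().lower()  # placeholder for real speech recognition


class FakeScene:
    def __init__(self):
        self.shown = []

    def show_space_picture(self, result: str) -> None:
        self.shown.append(("picture", result))

    def feedback_text_for(self, result: str) -> str:
        return f"Now showing: {result}"

    def show_text(self, text: str) -> None:
        self.shown.append(("text", text))


class FakeTTS:
    def synthesize(self, text: str) -> bytes:
        return text.encode()  # placeholder for real voice synthesis

    def play(self, audio: bytes) -> None:
        pass


def handle_interactive_voice(audio, recognizer, scene, tts) -> str:
    # S1: recognize the collected interactive voice and display the
    # corresponding space picture in the display page.
    result = recognizer.recognize(audio)
    scene.show_space_picture(result)
    # S2: display the feedback text in the page showing that space picture.
    feedback_text = scene.feedback_text_for(result)
    scene.show_text(feedback_text)
    # S3: generate the feedback voice from the feedback text and play it.
    tts.play(tts.synthesize(feedback_text))
    return feedback_text
```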
In the embodiment of the invention, in the process of executing the interactive voice, the display content in the display page (such as the space picture) is switched synchronously, and the user is informed of the feedback information (such as the spatial orientation of the switched space picture) in both voice and text form, which solves problems such as the lack of a sense of direction in the space display process.
Specifically, speech recognition can be performed on the collected interactive speech, and a spatial picture corresponding to the speech recognition result is displayed in the display page; and displaying a feedback text corresponding to the voice recognition result in a display page displaying the space picture, and generating and playing feedback voice corresponding to the feedback text according to the feedback text.
In addition, in the embodiment of the present invention, the voice recognition may be performed on the interactive voice in any available manner, the feedback text corresponding to the voice recognition result may be obtained in any available manner, and the specific output form of the feedback text and the feedback voice may also be set by the user according to the requirement, which is not limited in the embodiment of the present invention.
Fig. 3M is a schematic diagram of a voice interaction flow. When a user triggers a voice interaction instruction, the interactive voice of the user may be collected by a voice assistant. In the voice recognition stage, the collected interactive voice is encoded, decoded, and output as text, yielding the voice recognition result, that is, the "question" input by the user. When obtaining the feedback information, a feedback text may be obtained through the "natural language processing" stage in fig. 3M, including semantic analysis, attribute extraction, semantic calculation and retrieval, and candidate result output, and the feedback voice corresponding to the feedback text may then be generated through voice synthesis or the like.
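The "natural language processing" stage between recognition and synthesis can be reduced, for illustration only, to keyword-based intent matching. The intent table, keywords, and result strings below are all assumptions; a real system would use the semantic analysis and retrieval steps of fig. 3M rather than a lookup table.

```python
# Hypothetical candidate results keyed by question keywords; stands in for
# semantic analysis, attribute extraction, retrieval, and candidate output.
INTENTS = {
    "kitchen": "switch_room:kitchen",
    "bedroom": "switch_room:bedroom",
    "area": "answer:floor_area",
}


def process_question(question: str) -> str:
    """Map a recognized question to a candidate result (feedback text key)."""
    for word in question.lower().split():
        if word in INTENTS:
            return INTENTS[word]
    # No attribute matched: fall through to a default answer.
    return "answer:unknown"
```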
In the embodiment of the invention, intelligent voice question-and-answer interaction solves the problem of incomplete information acquisition when a user performs space display such as live-action house viewing, and improves the efficiency of acquiring space information such as housing resources. Executing instructions through voice interaction enhances the user's immersive house-viewing experience in the virtual three-dimensional space. Furthermore, during instruction execution, the space is switched synchronously and the user is informed of their spatial orientation through combined voice and text, which solves the problem of losing the sense of direction during display of the virtual three-dimensional space.
Referring to fig. 4, a schematic structural diagram of a space display apparatus according to an embodiment of the present invention is shown.
The space display device of the embodiment of the invention comprises: a space display module 210, a first interaction operation module 220, and a text label generation module 230.
The functions of the modules and the interaction relationship between the modules are described in detail below.
The space display module 210 is configured to display a display page of a virtual three-dimensional space of a target house on a touch-sensitive display screen, where the display page of the virtual three-dimensional space of the target house includes an interactive control panel, and the interactive control panel includes at least one first interactive control for triggering addition of a text label;
the first interaction operation module 220 is used for responding to the trigger operation of the first interaction control in the display page of the virtual three-dimensional space of the target house, which is detected on the touch-sensitive display screen, and acquiring the detected text input operation;
a text tag generating module 230, configured to generate a text tag according to text data corresponding to the text input operation, record a mapping relationship between a target object marked by the text input operation and the text tag, and display a thumbnail icon of the text tag in a display page of the virtual three-dimensional space;
the target object comprises at least one of a picture area in the display page, a space object to which the picture area in the display page belongs and a marker object in the display page, wherein the space object is an area of any space in the target house in the virtual three-dimensional space, and the marker object is a simulation object of any item in the target house in the virtual three-dimensional space.
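The core bookkeeping of module 230, recording the mapping relationship between a marked target object and its text label, can be sketched as follows. This is a hypothetical sketch under the assumption that a target object (picture area, space object, or marker object) can be identified by a string key; the class and field names are illustrative, not the patent's.

```python
from dataclasses import dataclass, field


@dataclass
class TextLabel:
    label_id: int
    text: str


@dataclass
class LabelStore:
    """Records the mapping between target objects and their text labels."""
    next_id: int = 1
    mapping: dict = field(default_factory=dict)

    def add_label(self, target_object: str, text: str) -> TextLabel:
        # Generate a text label from the text data of the input operation.
        label = TextLabel(self.next_id, text)
        self.next_id += 1
        # Record the mapping relationship: target object -> text label(s).
        self.mapping.setdefault(target_object, []).append(label)
        return label

    def labels_for(self, target_object: str) -> list:
        return self.mapping.get(target_object, [])
```

The thumbnail icon shown in the display page would then be rendered from each `TextLabel`, and the same mapping could be serialized when synchronizing to a third-party display page.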
Referring to fig. 5, in an embodiment of the present invention, the text label generating module 230 further includes:
the text display sub-module 231 is configured to obtain a text input operation for the display page, and display a text display page in the display page in a text input process, where the text display page is used to display a text corresponding to the text input operation;
a text label generating sub-module 232, configured to generate a text label for the text and record a mapping relationship between a target object marked by the text input operation and the text label in response to detecting a saving instruction for the text presentation page on the touch-sensitive display screen;
and the thumbnail icon display sub-module 233 is configured to cancel the display of the text display page in the display page, and display the thumbnail icon of the text label in the display page in the virtual three-dimensional space.
Referring to fig. 5, in an embodiment of the present invention, the apparatus may further include:
a first information synchronization module 240, configured to synchronize the text tag to a third party presentation page on a third party touch-sensitive display screen that displays the virtual three-dimensional space of the target house in synchronization with the touch-sensitive display screen, and to synchronously present a thumbnail icon of the text tag in the third party presentation page;
the third party display page is a user interface for synchronously displaying the target house with the display page on the touch-sensitive display screen, and the third party touch-sensitive display screen and the touch-sensitive display screen are different touch-sensitive display screens.
Referring to fig. 5, in the embodiment of the present invention, the interactive control panel further includes a second interactive control for triggering a sliding marker, and the apparatus may further include:
a second interactive operation module 250, configured to, in response to detecting, on the touch-sensitive display screen, a trigger operation to the second interactive control in the presentation page of the virtual three-dimensional space of the target house, acquire a sliding operation detected on the touch-sensitive display screen;
and the sliding marking module 260 is configured to display a mark trace corresponding to the sliding track in the display page according to the sliding track of the sliding operation, and record a mapping relationship between a target object marked by the sliding operation and the mark trace.
Referring to fig. 5, in an embodiment of the present invention, the apparatus may further include:
and a second information synchronization module 270, configured to synchronously display the mark trace in a third party display page on a third party touch-sensitive display screen that synchronously displays the virtual three-dimensional space of the target house with the touch-sensitive display screen according to a mapping relationship between the target object marked by the sliding operation and the mark trace.
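Modules 260 and 270 together record a mark trace for a target object and mirror it to synchronized third-party pages. The sketch below is hypothetical: representing a sliding track as a list of (x, y) points and a third-party page as an object with a `draw_trace` method are assumptions for illustration.

```python
class MarkSync:
    """Record mark traces and replay them on synchronized third-party pages."""

    def __init__(self):
        self.traces = {}             # target object -> list of recorded tracks
        self.third_party_pages = []  # synchronized third-party display pages

    def register_page(self, page) -> None:
        self.third_party_pages.append(page)

    def record_trace(self, target_object: str, track: list) -> None:
        # Record the mapping between the marked target object and the trace
        # (module 260), keyed by the target object so it survives view changes.
        self.traces.setdefault(target_object, []).append(track)
        # Synchronously display the mark trace on every third-party page
        # that shows the same virtual three-dimensional space (module 270).
        for page in self.third_party_pages:
            page.draw_trace(target_object, track)
```

The real-time position indication of modules 290 and 2110 could follow the same pattern, recording a single (x, y) position instead of a track.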
Referring to fig. 5, in an embodiment of the present invention, the interactive control panel further includes a third interactive control for triggering a real-time position indication, and the apparatus may further include:
a third interactive operation module 280, configured to, in response to detecting a trigger operation on the third interactive control in the presentation page of the virtual three-dimensional space of the target house on the touch-sensitive display screen, acquire a real-time position indication operation detected on the touch-sensitive display screen;
and the real-time position indicating module 290 is configured to display a real-time position mark corresponding to the real-time position indicating operation in the display page according to the real-time position indicating operation, and record position information of the real-time position mark.
Referring to fig. 5, in an embodiment of the present invention, the apparatus may further include:
the third information synchronization module 2110 is configured to synchronously display the real-time position mark in a third-party display page on a third-party touch-sensitive display screen, where the third-party touch-sensitive display screen synchronously displays the virtual three-dimensional space of the target house according to the position information of the real-time position mark.
Referring to fig. 5, in an embodiment of the present invention, the space display module 210 may further include:
the space display sub-module 211 is configured to display a display page of a virtual three-dimensional space of a target house on a touch-sensitive display screen, where the display page of the virtual three-dimensional space of the target house includes a floating window control for triggering display of the interactive control panel;
and an interactive control panel display module 212, configured to respond to the trigger instruction for the floating window control, and display the interactive control panel in the display interface.
Referring to fig. 5, in an embodiment of the present invention, the apparatus may further include:
the interactive voice acquisition module 2120 is configured to respond to the voice control instruction for the display page and acquire an interactive voice of the user;
a fourth interactive operation module 2130, configured to control, according to the collected interactive voice, the display content in the display page, and display feedback information of the interactive voice through the display page.
Optionally, the feedback information includes at least one of feedback voice and feedback text; the fourth interactive operation module 2130 may be specifically configured to:
carrying out voice recognition on the collected interactive voice, and displaying a space picture corresponding to a voice recognition result in the display page;
displaying a feedback text corresponding to the voice recognition result in a display page displaying the space picture; and generating feedback voice corresponding to the feedback text according to the feedback text and playing the feedback voice.
The space display device provided by the embodiment of the invention can realize each process realized in the method embodiments of fig. 1 to fig. 2, and is not repeated here to avoid repetition.
Preferably, an embodiment of the present invention further provides an electronic device, including a processor, a memory, and a computer program stored in the memory and executable on the processor. When the computer program is executed by the processor, the processes of the above spatial display method embodiment are implemented, and the same technical effects can be achieved; to avoid repetition, details are not repeated here.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when being executed by a processor, the computer program implements each process of the above-mentioned embodiment of the spatial display method, and can achieve the same technical effect, and in order to avoid repetition, the detailed description is omitted here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
Fig. 6 is a schematic diagram of a hardware structure of an electronic device implementing various embodiments of the present invention.
The electronic device 500 includes, but is not limited to: a radio frequency unit 501, a network module 502, an audio output unit 503, an input unit 504, a sensor 505, a display unit 506, a user input unit 507, an interface unit 508, a memory 509, a processor 510, and a power supply 511. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 6 does not constitute a limitation of the electronic device, and that the electronic device may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the electronic device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 501 may be used for receiving and sending signals during a message sending and receiving process or a call process, and specifically, receives downlink data from a base station and then processes the received downlink data to the processor 510; in addition, the uplink data is transmitted to the base station. In general, radio frequency unit 501 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 501 can also communicate with a network and other devices through a wireless communication system.
The electronic device provides wireless broadband internet access to the user via the network module 502, such as assisting the user in sending and receiving e-mails, browsing web pages, and accessing streaming media.
The audio output unit 503 may convert audio data received by the radio frequency unit 501 or the network module 502 or stored in the memory 509 into an audio signal and output as sound. Also, the audio output unit 503 may also provide audio output related to a specific function performed by the electronic apparatus 500 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 503 includes a speaker, a buzzer, a receiver, and the like.
The input unit 504 is used to receive an audio or video signal. The input Unit 504 may include a Graphics Processing Unit (GPU) 5041 and a microphone 5042. The graphics processor 5041 processes image data of a still picture or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 506. The image frames processed by the graphics processor 5041 may be stored in the memory 509 (or other storage medium) or transmitted via the radio frequency unit 501 or the network module 502. The microphone 5042 may receive sounds and may be capable of processing such sounds into audio data. In the phone call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station via the radio frequency unit 501.
The electronic device 500 also includes at least one sensor 505, such as light sensors, motion sensors, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 5061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 5061 and/or a backlight when the electronic device 500 is moved to the ear. As one type of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of an electronic device (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 505 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 506 is used to display information input by the user or information provided to the user. The Display unit 506 may include a Display panel 5061, and the Display panel 5061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 507 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 507 includes a touch panel 5071 and other input devices 5072. Touch panel 5071, also referred to as a touch screen, may collect touch operations by a user on or near it (e.g., operations by a user on or near touch panel 5071 using a finger, stylus, or any suitable object or attachment). The touch panel 5071 may include two parts of a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 510, and receives and executes commands sent by the processor 510. In addition, the touch panel 5071 may be implemented in various types such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. In addition to the touch panel 5071, the user input unit 507 may include other input devices 5072. In particular, other input devices 5072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, the touch panel 5071 may be overlaid on the display panel 5061, and when the touch panel 5071 detects a touch operation thereon or nearby, the touch operation is transmitted to the processor 510 to determine the type of the touch event, and then the processor 510 provides a corresponding visual output on the display panel 5061 according to the type of the touch event. Although in fig. 6, the touch panel 5071 and the display panel 5061 are two independent components to implement the input and output functions of the electronic device, in some embodiments, the touch panel 5071 and the display panel 5061 may be integrated to implement the input and output functions of the electronic device, and is not limited herein.
The interface unit 508 is an interface for connecting an external device to the electronic apparatus 500. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 508 may be used to receive input (e.g., data information, power, etc.) from external devices and transmit the received input to one or more elements within the electronic apparatus 500 or may be used to transmit data between the electronic apparatus 500 and external devices.
The memory 509 may be used to store software programs as well as various data. The memory 509 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 509 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device.
The processor 510 is a control center of the electronic device, connects various parts of the whole electronic device by using various interfaces and lines, performs various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory 509 and calling data stored in the memory 509, thereby performing overall monitoring of the electronic device. Processor 510 may include one or more processing units; preferably, the processor 510 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 510.
The electronic device 500 may further include a power supply 511 (e.g., a battery) for supplying power to various components, and preferably, the power supply 511 may be logically connected to the processor 510 via a power management system, so as to implement functions of managing charging, discharging, and power consumption via the power management system.
In addition, the electronic device 500 includes some functional modules that are not shown, and are not described in detail herein.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: various media capable of storing program codes, such as a U disk, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (12)

1. A space display method is characterized by comprising the following steps:
displaying a display page of a virtual three-dimensional space of a target house on a touch-sensitive display screen, wherein the display page of the virtual three-dimensional space of the target house comprises an interactive control panel, and the interactive control panel at least comprises a first interactive control used for triggering addition of a text label;
in response to detecting a trigger operation on the first interaction control in a display page of a virtual three-dimensional space of a target house on the touch-sensitive display screen, acquiring a detected text input operation;
generating a text label according to the text data corresponding to the text input operation, and recording the mapping relation between the target object marked by the text input operation and the text label, wherein the mapping relation comprises the following steps:
acquiring a text input operation aiming at the display page, and displaying a text display page in the display page in a text input process, wherein the text display page is used for displaying a text corresponding to the text input operation;
in response to detecting a saving instruction for the text display page on the touch-sensitive display screen, generating a text label for the text, and recording a mapping relation between a target object marked by the text input operation and the text label; displaying the thumbnail icon of the text label in a display page of the virtual three-dimensional space; the target object comprises at least one of a picture area in the display page, a space object to which the picture area in the display page belongs and a marker object in the display page, wherein the space object is an area of any space in the target house in the virtual three-dimensional space, and the marker object is a simulation object of any item in the target house in the virtual three-dimensional space;
synchronizing the text label to a third party display page on a third party touch-sensitive display screen synchronously displaying the virtual three-dimensional space of the target house with the touch-sensitive display screen, and synchronously displaying the thumbnail icon of the text label in the third party display page, wherein the third party display page is a user interface synchronously displaying the target house with the display page on the touch-sensitive display screen, and the third party touch-sensitive display screen and the touch-sensitive display screen are different touch-sensitive display screens.
2. The method according to claim 1, wherein the step of generating a text label according to the text data input by the text input operation, recording the mapping relationship between the target object marked by the text input operation and the text label, and displaying the thumbnail icon of the text label in the display page of the virtual three-dimensional space comprises:
and canceling the display of the text display page in the display page, and displaying the thumbnail icon of the text label in the display page of the virtual three-dimensional space.
3. The method of claim 1, further comprising a second interactive control for triggering a slide marker in the interactive control panel, the method further comprising:
in response to detecting a trigger operation on the second interaction control in the display page of the virtual three-dimensional space of the target house on the touch-sensitive display screen, acquiring a sliding operation detected on the touch-sensitive display screen;
and displaying a mark trace corresponding to the sliding track in the display page according to the sliding track of the sliding operation, and recording a mapping relation between a target object marked by the sliding operation and the mark trace.
4. The method of claim 3, further comprising:
and synchronously displaying the mark trace in a third party display page on a third party touch-sensitive display screen that displays the virtual three-dimensional space of the target house synchronously with the touch-sensitive display screen, according to the mapping relation between the target object marked by the sliding operation and the mark trace.
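Claims 3 and 4 can be sketched as recording a sliding track as a list of points, mapped to the marked target object, and replaying that trace on a synchronized page. The names (`SlideMarker`, `trace_map`) are illustrative assumptions.

```python
class SlideMarker:
    """Sketch of claims 3-4: after the second interactive control is
    triggered, each detected sliding track becomes a mark trace drawn in
    the display page and replayed on synchronized third-party pages."""

    def __init__(self):
        self.trace_map = {}  # target object id -> list of (x, y) points

    def record_slide(self, target_object_id, track_points):
        # record the mapping relation between target object and mark trace
        self.trace_map[target_object_id] = list(track_points)
        return self.trace_map[target_object_id]

    def sync_to(self, third_party_canvas, target_object_id):
        # redraw the recorded trace on another page using the mapping
        third_party_canvas.extend(self.trace_map[target_object_id])
```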
5. The method of claim 1, wherein the interactive control panel further comprises a third interactive control for triggering a real-time position indication, the method further comprising:
in response to detecting a triggering operation on the third interactive control in the display page of the virtual three-dimensional space of the target house on the touch-sensitive display screen, acquiring a real-time position indication operation detected on the touch-sensitive display screen;
and displaying a real-time position mark corresponding to the real-time position indication operation in the display page according to the real-time position indication operation, and recording the position information of the real-time position mark.
6. The method of claim 5, further comprising:
and synchronously displaying the real-time position mark in a third party display page on a third party touch-sensitive display screen that displays the virtual three-dimensional space of the target house synchronously with the touch-sensitive display screen, according to the position information of the real-time position mark.
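Claims 5 and 6 follow the same record-then-mirror pattern for a real-time position mark. The sketch below assumes a position is identified by a space id plus coordinates; the claims do not fix a concrete representation.

```python
class PositionIndicator:
    """Sketch of claims 5-6: after the third interactive control is
    triggered, the real-time position mark is recorded and then pushed
    to synchronized third-party pages using its position information."""

    def __init__(self):
        self.position = None  # (space_id, x, y) of the current mark

    def indicate(self, space_id, x, y):
        # record the position information of the real-time position mark
        self.position = (space_id, x, y)
        return self.position

    def sync_to(self, third_party_page: dict):
        # mirror the mark onto another page from the recorded position
        third_party_page["position_mark"] = self.position
```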
7. The method according to any one of claims 1-6, wherein the step of displaying a presentation page of the virtual three-dimensional space of the target house on a touch-sensitive display screen comprises:
displaying a display page of a virtual three-dimensional space of a target house on a touch-sensitive display screen, wherein the display page of the virtual three-dimensional space of the target house comprises a floating window control used for triggering display of the interactive control panel;
and in response to a triggering instruction for the floating window control, displaying the interactive control panel in the display page.
8. The method according to any one of claims 1-6, further comprising:
in response to a voice control instruction for the display page, collecting interactive voice of a user;
and controlling the display content in the display page according to the acquired interactive voice, and displaying the feedback information of the interactive voice through the display page.
9. The method of claim 8, wherein the feedback information comprises at least one of feedback speech and feedback text;
the step of controlling the display content in the display page according to the collected interactive voice and displaying the feedback information of the interactive voice through the display page comprises the following steps:
performing voice recognition on the collected interactive voice, and displaying a space picture corresponding to a voice recognition result in the display page;
displaying a feedback text corresponding to the voice recognition result in a display page displaying the space picture;
and generating feedback voice corresponding to the feedback text, and playing the feedback voice.
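The voice-interaction steps of claims 8 and 9 can be sketched as a small dispatcher: a recognized command selects the space picture to display, a feedback text is shown, and that text is what would be synthesized into feedback voice. The room names, picture filenames, and reply strings are illustrative assumptions.

```python
def handle_interactive_voice(recognized_text: str, page: dict) -> str:
    """Sketch of claims 8-9: map a voice recognition result to a space
    picture, display feedback text, and return the text that would be
    fed to speech synthesis as feedback voice."""
    rooms = {"kitchen": "kitchen_pano.jpg", "bedroom": "bedroom_pano.jpg"}
    for room, picture in rooms.items():
        if room in recognized_text.lower():
            page["picture"] = picture  # switch the displayed space picture
            page["feedback_text"] = f"Now showing the {room}."
            return page["feedback_text"]
    page["feedback_text"] = "Sorry, I did not find that room."
    return page["feedback_text"]
```

A production system would replace the keyword match with a real speech-recognition result and route the returned text to a text-to-speech engine.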
10. A space display apparatus, comprising:
the space display module is used for displaying a display page of a virtual three-dimensional space of a target house on a touch-sensitive display screen, wherein the display page of the virtual three-dimensional space of the target house comprises an interactive control panel, and the interactive control panel at least comprises a first interactive control used for triggering addition of a text label;
the first interactive operation module is used for acquiring a detected text input operation in response to a trigger operation on the first interactive control in the display page of the virtual three-dimensional space of the target house;
a text label generating module, configured to obtain a text input operation for the display page, and display a text display page in the display page in a text input process, where the text display page is used to display a text corresponding to the text input operation, generate a text label for the text in response to detecting a save instruction for the text display page on the touch-sensitive display screen, record a mapping relationship between a target object marked by the text input operation and the text label, and display a thumbnail icon of the text label in the display page in the virtual three-dimensional space; the target object comprises at least one of a picture area in the display page, a space object to which the picture area in the display page belongs and a marker object in the display page, wherein the space object is an area of any space in the target house in the virtual three-dimensional space, and the marker object is a simulation object of any item in the target house in the virtual three-dimensional space;
and the text tag synchronization module is used for synchronizing the text label to a third party display page on a third party touch-sensitive display screen that displays the virtual three-dimensional space of the target house synchronously with the touch-sensitive display screen, and synchronously displaying the thumbnail icon of the text label in the third party display page, wherein the third party display page is a user interface that displays the target house synchronously with the display page on the touch-sensitive display screen, and the third party touch-sensitive display screen and the touch-sensitive display screen are different touch-sensitive display screens.
11. An electronic device, comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the space display method of any one of claims 1 to 9.
12. A computer-readable storage medium, characterized in that a computer program is stored thereon, wherein the computer program, when executed by a processor, implements the steps of the space display method according to any one of claims 1 to 9.
CN202010889091.4A 2020-08-28 2020-08-28 Space display method and device, electronic equipment and storage medium Active CN112068752B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010889091.4A CN112068752B (en) 2020-08-28 2020-08-28 Space display method and device, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN112068752A CN112068752A (en) 2020-12-11
CN112068752B true CN112068752B (en) 2022-01-11

Family

ID=73659727

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010889091.4A Active CN112068752B (en) 2020-08-28 2020-08-28 Space display method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112068752B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112667086B (en) * 2021-01-04 2023-06-23 瑞庭网络技术(上海)有限公司 Interaction method and device for VR house watching
CN113535064B (en) * 2021-09-16 2022-02-01 北京亮亮视野科技有限公司 Virtual label marking method and device, electronic equipment and storage medium
CN114003323A (en) * 2021-09-17 2022-02-01 北京城市网邻信息技术有限公司 Information display method, device, equipment and storage medium
CN113961107B (en) * 2021-09-30 2024-04-16 西安交通大学 Screen-oriented augmented reality interaction method, device and storage medium
CN114385299A (en) * 2022-01-12 2022-04-22 北京字跳网络技术有限公司 Page display control method and device, mobile terminal and storage medium

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104680588A (en) * 2015-02-13 2015-06-03 上海同筑信息科技有限公司 BIM-based event marking method and system
CN105825551A (en) * 2016-03-11 2016-08-03 广州视睿电子科技有限公司 Three-dimensional tag realization method and apparatus
CN107908763A (en) * 2017-11-24 2018-04-13 曾良军 Display method and device for application content, and terminal device
CN108693960A (en) * 2017-04-12 2018-10-23 麦奇教育集团有限公司 Interactive instructional system
CN108765084A (en) * 2018-05-30 2018-11-06 链家网(北京)科技有限公司 Synchronization processing method and device for a virtual three-dimensional space
CN108776917A (en) * 2018-05-30 2018-11-09 链家网(北京)科技有限公司 Synchronization processing method and device for a virtual three-dimensional space
CN108961418A (en) * 2018-06-06 2018-12-07 杭州亿间科技有限公司 Knowledge visualization interface system and method based on a virtual three-dimensional space
CN109934736A (en) * 2019-01-21 2019-06-25 广东康云科技有限公司 Data processing method and system for intelligent house viewing
CN110888530A (en) * 2019-11-19 2020-03-17 上海萃钛智能科技有限公司 3D visual editor and editing method based on electronic map
CN111213206A (en) * 2017-07-07 2020-05-29 Time2Market Method and system for providing a user interface for a three-dimensional environment

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102263772A (en) * 2010-05-28 2011-11-30 经典时空科技(北京)有限公司 Virtual conference system based on three-dimensional technology
US10284794B1 (en) * 2015-01-07 2019-05-07 Car360 Inc. Three-dimensional stabilized 360-degree composite image capture


Similar Documents

Publication Publication Date Title
CN112068752B (en) Space display method and device, electronic equipment and storage medium
CN109905754B (en) Virtual gift receiving method and device and storage equipment
EP2400733B1 (en) Mobile terminal for displaying augmented-reality information
CN108737904B (en) Video data processing method and mobile terminal
CN113132787A (en) Live content display method and device, electronic equipment and storage medium
CN110768805B (en) Group message display method and electronic equipment
CN110557565B (en) Video processing method and mobile terminal
CN110087117A (en) Video playing method and terminal
CN110866038A (en) Information recommendation method and terminal equipment
CN108491130A (en) Application program switching method and mobile terminal
CN110944139B (en) Display control method and electronic equipment
CN109871164A (en) Message sending method and terminal device
CN109618218B (en) Video processing method and mobile terminal
CN109495638B (en) Information display method and terminal
CN112312217A (en) Image editing method and device, computer equipment and storage medium
CN115767164A (en) Information display method, client, electronic equipment and storage medium
CN109669710B (en) Note processing method and terminal
CN109947988B (en) Information processing method and device, terminal equipment and server
CN111046211A (en) Article searching method and electronic equipment
CN111093033B (en) Information processing method and device
CN112232898A (en) Space display method and device, electronic equipment and storage medium
CN111400552A (en) Note creating method and electronic equipment
CN108287644B (en) Information display method of application program and mobile terminal
CN107896282B (en) Schedule viewing method and device and terminal
CN110928616A (en) Shortcut icon management method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant