CN113963133B - Method and device for generating house source watching route, electronic equipment and readable medium - Google Patents
- Publication number
- CN113963133B CN113963133B CN202111111818.7A CN202111111818A CN113963133B CN 113963133 B CN113963133 B CN 113963133B CN 202111111818 A CN202111111818 A CN 202111111818A CN 113963133 B CN113963133 B CN 113963133B
- Authority
- CN
- China
- Prior art keywords
- house source
- initial
- interface
- label
- target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/003—Navigation within 3D models or images
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Computer Graphics (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
Abstract
The embodiment of the invention provides a method and an apparatus for generating a house source guided-viewing route, an electronic device and a readable medium. The content displayed in the graphical user interface of an electronic terminal includes at least a house source interface corresponding to a target house source and a route design control, where the target house source includes at least one functional space. In response to a touch operation on the route design control, the electronic terminal displays a house source video interface corresponding to the target house source; at least one target house source video clip is selected in the house source video interface, and touching the flow control generates an initial viewing path in a preview interface; a house source guided-viewing route matched with the target house source is then generated from the initial viewing path. A guiding user can thus design a customized guided-viewing route for the target house source by selecting the house source video clips corresponding to its functional spaces, meeting the requirements of different users.
Description
Technical Field
The present invention relates to the field of house source display technologies, and in particular, to a method and an apparatus for generating a house source guided-viewing route, an electronic device, and a computer-readable medium.
Background
To give users an experience close to visiting a property in person when renting or buying, most rental applications provide VR live-scene viewing services, and house sources in current VR scenes are basically presented through online live broadcast. During the live broadcast, the house source broker controls what is shown on the screen and introduces the house source with accompanying voice. Some products also provide a recording function, so that a video is generated after the live broadcast and users who missed it can watch the recorded content later. However, the VR scene of the house source is recorded before the live broadcast and the broker introduces the entire VR scene, so on the one hand the broker cannot provide personalized content for different users and it is difficult to meet differentiated house-hunting needs, and on the other hand a house-hunting user cannot preview the content of the house source and obtains information inefficiently.
Disclosure of Invention
The embodiment of the invention provides a method and an apparatus for generating a house source guided-viewing route, an electronic device and a readable medium, aiming to solve the problems in the prior art that a guided-viewing route cannot be customized for individual users during VR live-broadcast viewing and that differentiated requirements are difficult to meet.
The invention discloses a method for generating a house source guided-viewing route, wherein the content displayed through a graphical user interface of an electronic terminal includes at least a house source interface corresponding to a target house source and a route design control; the target house source includes at least one functional space, and the method includes:
in response to a touch operation on the route design control, displaying a house source video interface corresponding to the target house source, where the house source video interface includes a flow control and at least one house source video clip corresponding to a functional space;
in response to a selection operation on the house source video clips, selecting at least one target house source video clip and acquiring sequence information corresponding to the at least one target house source video clip;
in response to a touch operation on the flow control, displaying a preview interface in the graphical user interface, where the preview interface includes an initial viewing path corresponding to the sequence information;
and generating a house source guided-viewing route matched with the target house source according to the initial viewing path.
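As a minimal, non-authoritative sketch of the data flow described above (the type and function names are illustrative assumptions, not part of the disclosure), the selected clips, the derived sequence information and the resulting route could be modeled as follows:

```typescript
// Hypothetical data model for the four steps above (names are assumptions).
interface HouseSourceVideoClip {
  clipId: string;
  functionalSpace: string;   // e.g. "bedroom 1", "living room"
  videoUrl: string;
}

interface InitialViewingPath {
  // target clips in the order they were selected (the "sequence information")
  orderedClips: HouseSourceVideoClip[];
}

interface GuidedViewingRoute {
  houseSourceId: string;
  path: InitialViewingPath;
}

// Steps 2-4: derive the initial viewing path from the selected clips and
// wrap it into a guided-viewing route for the target house source.
function buildInitialViewingPath(selected: HouseSourceVideoClip[]): InitialViewingPath {
  return { orderedClips: [...selected] };
}

function generateGuidedViewingRoute(
  houseSourceId: string,
  path: InitialViewingPath
): GuidedViewingRoute {
  return { houseSourceId, path };
}
```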
Optionally, generating a house source guided-viewing route matched with the target house source according to the initial viewing path includes:
in response to a touch operation on a roaming route generation control, displaying a roaming route editing interface, where the roaming route editing interface includes at least roaming point information corresponding to the video content controls, a floor plan corresponding to the target house source, and a roaming route that is displayed in the floor plan and corresponds to the initial viewing path;
and generating a house source guided-viewing route matched with the target house source according to the roaming route and the roaming point information, where the roaming point information includes a functional space name and a tag name corresponding to the video content control.
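A small sketch of how the roaming route and its roaming point information might be combined into the final route (a non-authoritative illustration; all names are assumptions):

```typescript
// Hypothetical shape of the roaming route editing data.
interface RoamingPointInfo {
  functionalSpaceName: string;  // e.g. "living room"
  tagName?: string;             // tag name attached to the video content control
}

interface RoamingRoute {
  // ordered positions on the floor plan, one per roaming point
  points: Array<{ x: number; y: number }>;
}

// Assemble the final guided-viewing route from the edited roaming route
// and its roaming point information.
function assembleRouteFromRoaming(
  houseSourceId: string,
  route: RoamingRoute,
  pointInfo: RoamingPointInfo[]
): { houseSourceId: string; route: RoamingRoute; pointInfo: RoamingPointInfo[] } {
  return { houseSourceId, route, pointInfo };
}
```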
Optionally, the roaming route editing interface includes a publish-and-share control, and generating a house source guided-viewing route matched with the target house source according to the roaming route and the roaming point information includes:
in response to a touch operation on the publish-and-share control, generating a house source guided-viewing route for the target house source using the roaming route and the roaming point information.
Optionally, the roaming route includes a positioning identifier corresponding to roaming point information, and the method further includes:
and in response to a touch operation on the positioning identifier, determining the target roaming point information corresponding to the touch operation, and switching the target roaming point information from a first display style to a second display style.
Optionally, the roaming point information includes a preview effect control, and the method further includes:
and in response to a touch operation on the preview effect control, determining the target roaming point information corresponding to the touch operation, and playing the target house source video clip corresponding to the target roaming point information.
Optionally, displaying a preview interface in the graphical user interface in response to the touch operation on the flow control includes:
in response to a touch operation on the flow control, displaying a preview interface in the graphical user interface, and displaying an initial viewing path corresponding to the sequence information in the preview interface;
and sequentially playing, in the preview interface, the target house source video clips corresponding to the video content controls according to the sequence information.
Optionally, the method further comprises:
and in response to the end of a drag operation on a video content control, adjusting the sequence information of the video content control in the initial viewing path to obtain a target viewing path.
Optionally, the method further comprises:
in response to a touch operation at any position in the preview interface, acquiring an initial display position of the touch operation on the graphical user interface, and displaying a tag editing window for the initial display position in the preview interface;
and in response to a first input operation on the tag editing window, acquiring first tag information corresponding to the first input operation, and generating an initial tag identifier corresponding to the initial display position.
Optionally, the tag editing window further includes a tag name control and a tag content control, the tag content control further includes an add-picture control and an add-link control, and responding to a first input operation on the tag editing window, acquiring first tag information corresponding to the first input operation, and generating an initial tag identifier corresponding to the initial display position includes:
in response to a first input operation on the tag name control, acquiring tag name information;
and/or in response to a second input operation on the tag content control, acquiring tag content information;
and/or in response to a third input operation on the add-picture control, acquiring tag picture information;
and/or in response to a fourth input operation on the add-link control, acquiring tag link information;
and generating an initial tag identifier corresponding to the initial display position using at least one of the tag name information, the tag content information, the tag picture information and the tag link information.
Optionally, the method further comprises:
in response to a touch operation on the initial tag identifier, displaying a tag editing window corresponding to the initial tag identifier;
and in response to a second input operation on the tag editing window, acquiring second tag information corresponding to the second input operation, and generating a target tag identifier corresponding to the initial display position according to the second tag information.
Optionally, the tag editing window further includes a tag deletion control, and the method further includes:
in response to a touch operation on the initial tag identifier, displaying a tag editing window corresponding to the initial tag identifier;
and in response to a touch operation on the tag deletion control, deleting the initial tag identifier corresponding to the initial display position in the preview interface.
Optionally, responding to a touch operation at any position on the graphical user interface, acquiring an initial display position of the touch operation on the graphical user interface, and displaying a tag editing window for the initial display position in the preview interface includes:
in response to a touch operation at any position on the graphical user interface, pausing the playback of the current target house source video clip corresponding to the current video content control, acquiring the initial display position of the touch operation on the graphical user interface, and the target video frame and target time point corresponding to the initial display position in the current house source video clip, and displaying a tag editing window for the initial display position in the preview interface.
Optionally, the tag editing window includes a tag publishing control, and the method further includes:
and in response to a touch operation on the tag publishing control, generating, according to the first tag information, an initial tag identifier corresponding to the initial display position on the target video frame and at the target time point acquired in the preview interface.
Optionally, the method further comprises:
and in response to the end of a drag operation on the initial tag identifier, determining the target display position of the drag operation on the graphical user interface, and controlling the initial tag identifier to move from the initial display position to the target display position.
Optionally, the method further comprises:
in response to a drag operation on the initial tag identifier, acquiring a target display position of the initial tag identifier in the graphical user interface, and displaying a deletion area in the graphical user interface if the target display position meets a preset condition;
and in response to the initial tag identifier being dragged into the deletion area, deleting the initial tag identifier in the preview interface.
Optionally, responding to a drag operation on the initial tag identifier, acquiring a target display position of the initial tag identifier in the graphical user interface, and displaying a deletion area in the graphical user interface if the target display position meets a preset condition includes:
in response to the drag operation on the initial tag identifier, acquiring the target display position of the initial tag identifier in the graphical user interface, and displaying the deletion area in the graphical user interface if the distance between the target display position and the deletion area is smaller than a preset distance threshold.
Optionally, responding to a drag operation on the initial tag identifier, acquiring a target display position of the initial tag identifier in the graphical user interface, and displaying a deletion area in the graphical user interface if the target display position meets a preset condition includes:
in response to the drag operation on the initial tag identifier, acquiring the duration of the drag operation on the initial tag identifier in the graphical user interface, and displaying a deletion area in the graphical user interface if the drag duration is greater than a preset time threshold.
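The two trigger conditions above can be summarized in a small sketch (a non-authoritative illustration; the threshold values, names and the distance metric are assumptions):

```typescript
// Hypothetical check deciding whether the deletion area should be shown
// while an initial tag identifier is being dragged.
interface DragState {
  position: { x: number; y: number }; // current display position of the tag
  durationMs: number;                 // how long the drag has lasted
}

const DISTANCE_THRESHOLD_PX = 120;    // assumed preset distance threshold
const DURATION_THRESHOLD_MS = 800;    // assumed preset time threshold

function shouldShowDeletionArea(
  drag: DragState,
  deletionAreaCenter: { x: number; y: number }
): boolean {
  const dx = drag.position.x - deletionAreaCenter.x;
  const dy = drag.position.y - deletionAreaCenter.y;
  const distance = Math.hypot(dx, dy);
  // Either condition from the two optional embodiments above triggers display.
  return distance < DISTANCE_THRESHOLD_PX || drag.durationMs > DURATION_THRESHOLD_MS;
}
```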
Optionally, the preview interface further includes an editing control switch, and the method further includes:
when the editing control switch is in an on state, in response to a touch operation at any position in the preview interface, acquiring an initial display position of the touch operation on the graphical user interface, and displaying a tag editing window for the initial display position in the preview interface;
and when the editing control switch is in an off state, not responding to the touch operation used to generate the tag editing window.
Optionally, the roaming route editing interface further includes a preview control, and the method further includes:
in response to a touch operation on the preview control, displaying a preview interface, acquiring an initial viewing path corresponding to the roaming route, and playing the target video clip corresponding to the initial viewing path;
and when the playing progress reaches the time point, displaying the initial tag identifier corresponding to the time point and the associated information interface corresponding to the tag identifier.
Optionally, the house source video interface further includes a create control, and the method further includes:
in response to a touch operation on the create control, displaying an information acquisition interface in the graphical user interface, and displaying the current functional space in the information acquisition interface;
and in response to an acquisition operation on the information acquisition interface, acquiring the house source video clip corresponding to the current functional space, and displaying the house source video clip on the house source video interface.
Optionally, the house source video interface further includes a create control, and the method further includes:
in response to a touch operation on the create control, displaying a local multimedia interface of the electronic terminal in the graphical user interface, where the local multimedia interface includes at least one local video clip;
and selecting the house source video clip corresponding to the functional space, and displaying the house source video clip on the house source video interface.
The embodiment of the invention also discloses an apparatus for generating a house source guided-viewing route, wherein the content displayed through the graphical user interface of an electronic terminal includes at least a house source interface corresponding to a target house source and a route design control; the target house source includes at least one functional space, and the apparatus includes:
a house source video display module, configured to display, in response to a touch operation on the route design control, a house source video interface corresponding to the target house source, where the house source video interface includes a flow control and at least one house source video clip corresponding to a functional space;
a house source video selection module, configured to select, in response to a selection operation on the house source video clips, at least one target house source video clip and acquire sequence information corresponding to the at least one target house source video clip;
a preview interface display module, configured to display, in response to a touch operation on the flow control, a preview interface in the graphical user interface, where the preview interface includes an initial viewing path corresponding to the sequence information;
and a roaming route generation module, configured to generate a house source guided-viewing route matched with the target house source according to the initial viewing path.
Optionally, the graphical user interface further displays a route design control, and the house source video display module is specifically configured to:
display, in response to a touch operation on the route design control, a house source video interface corresponding to the target house source.
Optionally, the house source video interface further includes at least one house source video clip corresponding to a functional space, and the house source video selection module is specifically configured to:
select, in response to a selection operation on the house source video clips, at least one target house source video clip, and acquire the sequence information corresponding to the at least one target house source video clip.
Optionally, the house source video interface includes a flow control, and the preview interface display module is specifically configured to:
display, in response to a touch operation on the flow control, a preview interface in the graphical user interface.
Optionally, the preview interface includes an initial viewing path corresponding to the sequence information, and the preview interface display module includes:
an initial viewing path display sub-module, configured to display, in response to a touch operation on the flow control, a preview interface in the graphical user interface and display an initial viewing path corresponding to the sequence information in the preview interface;
a content playing sub-module, configured to sequentially play, in the preview interface, the target house source video clips corresponding to the video content controls according to the sequence information;
and a video content control adjustment sub-module, configured to adjust, in response to the end of a drag operation on a video content control, the sequence information of the video content control in the initial viewing path to obtain a target viewing path.
Optionally, the apparatus further includes:
a tag editing window generation sub-module, configured to acquire, in response to a touch operation at any position in the preview interface, an initial display position of the touch operation on the graphical user interface, and display a tag editing window for the initial display position in the preview interface.
Optionally, the apparatus further includes:
an initial tag identifier generation sub-module, configured to acquire, in response to a first input operation on the tag editing window, first tag information corresponding to the first input operation, and generate an initial tag identifier corresponding to the initial display position.
Optionally, the apparatus further includes:
a tag name information acquisition sub-module, configured to acquire tag name information in response to a first input operation on the tag name control;
a tag content information acquisition sub-module, configured to acquire tag content information in response to a second input operation on the tag content control;
a tag picture information acquisition sub-module, configured to acquire tag picture information in response to a third input operation on the add-picture control;
and a tag link information acquisition sub-module, configured to acquire tag link information in response to a fourth input operation on the add-link control.
Optionally, the apparatus further includes:
a target tag identifier generation sub-module, configured to acquire, in response to a second input operation on the tag editing window, second tag information corresponding to the second input operation, and generate a target tag identifier corresponding to the initial display position according to the second tag information.
Optionally, the apparatus further includes:
a tag identifier deletion sub-module, configured to delete, in response to a touch operation on the tag deletion control, the initial tag identifier corresponding to the initial display position in the preview interface.
Optionally, the apparatus further includes:
a tag identifier acquisition sub-module, configured to acquire the initial display position of the touch operation on the graphical user interface, and the target video frame and target time point corresponding to the initial display position in the current house source video clip.
Optionally, the apparatus further includes:
a tag identifier publishing sub-module, configured to generate, in response to a touch operation on the tag publishing control and according to the first tag information, an initial tag identifier corresponding to the initial display position on the target video frame and at the target time point acquired in the preview interface.
Optionally, the apparatus further includes:
a tag identifier dragging sub-module, configured to determine, in response to the end of a drag operation on the initial tag identifier, the target display position of the drag operation on the graphical user interface, and control the initial tag identifier to move from the initial display position to the target display position.
Optionally, the apparatus further includes:
a target display position acquisition module, configured to acquire, in response to a drag operation on the initial tag identifier, a target display position of the initial tag identifier in the graphical user interface, and display a deletion area in the graphical user interface if the target display position meets a preset condition;
and an initial tag identifier drag-deletion sub-module, configured to delete, in response to the initial tag identifier being dragged into the deletion area, the initial tag identifier in the preview interface.
Optionally, the target display position acquisition module is specifically configured to:
acquire, in response to the drag operation on the initial tag identifier, the target display position of the initial tag identifier in the graphical user interface, and display the deletion area in the graphical user interface if the distance between the target display position and the deletion area is smaller than a preset distance threshold.
Optionally, the target display position acquisition module is specifically configured to:
acquire, in response to the drag operation on the initial tag identifier, the duration of the drag operation on the initial tag identifier in the graphical user interface, and display a deletion area in the graphical user interface if the drag duration is greater than a preset time threshold.
Optionally, the apparatus further includes:
an editing control switch on sub-module, configured to acquire, when the editing control switch is in an on state and in response to a touch operation at any position in the preview interface, the initial display position of the touch operation on the graphical user interface, and display a tag editing window for the initial display position in the preview interface;
and an editing control switch off sub-module, configured not to respond to the touch operation used to generate the tag editing window when the editing control switch is in an off state.
Optionally, the apparatus further includes:
a roaming route preview sub-module, configured to display, in response to a touch operation on the preview control, a preview interface, acquire an initial viewing path corresponding to the roaming route, and play the target video clip corresponding to the initial viewing path.
Optionally, the apparatus further includes:
an initial tag identifier display sub-module, configured to display, when the playing progress reaches the time point, the initial tag identifier corresponding to the time point and the associated information interface corresponding to the tag identifier.
Optionally, the apparatus further includes:
a house source video clip creation sub-module, configured to display a newly acquired house source video clip on the house source video interface.
Optionally, the house source video clip creation sub-module is specifically configured to:
display, in response to a touch operation on the create control, an information acquisition interface in the graphical user interface, and display the current functional space in the information acquisition interface;
and acquire, in response to an acquisition operation on the information acquisition interface, the house source video clip corresponding to the current functional space, and display the house source video clip on the house source video interface.
Optionally, the house source video clip creation sub-module is specifically configured to:
display, in response to a touch operation on the create control, a local multimedia interface of the electronic terminal in the graphical user interface, where the local multimedia interface includes at least one local video clip;
and select the house source video clip corresponding to the functional space, and display the house source video clip on the house source video interface.
Optionally, the roaming route generation module is specifically configured to:
generate a house source guided-viewing route matched with the target house source according to the initial viewing path.
Optionally, the roaming route generation module includes:
a roaming route editing interface display sub-module, configured to display the roaming route editing interface in response to a touch operation on the roaming route generation control;
a roaming route generation sub-module, configured to generate, in response to a touch operation on the publish-and-share control, a house source guided-viewing route for the target house source using the roaming route and the roaming point information;
a roaming point information style switching sub-module, configured to determine, in response to a touch operation on the positioning identifier, the target roaming point information corresponding to the touch operation, and switch the target roaming point information from a first display style to a second display style;
and a roaming point information preview sub-module, configured to determine, in response to a touch operation on the preview effect control, the target roaming point information corresponding to the touch operation, and play the target house source video clip corresponding to the target roaming point information.
The embodiment of the invention also discloses an electronic device, which includes a processor, a communication interface, a memory and a communication bus, where the processor, the communication interface and the memory communicate with each other through the communication bus;
the memory is used for storing a computer program;
the processor is configured to implement the method according to the embodiment of the present invention when executing the program stored in the memory.
Also disclosed are one or more computer-readable media having instructions stored thereon, which, when executed by one or more processors, cause the processors to perform a method according to an embodiment of the invention.
The embodiment of the invention has the following advantages:
in the embodiment of the invention, a graphical user interface is provided through an electronic terminal, and the content displayed in the graphical user interface includes at least a house source interface corresponding to a target house source and a route design control, where the target house source includes at least one functional space. In response to a touch operation on the route design control, the electronic terminal displays a house source video interface corresponding to the target house source; at least one target house source video clip is selected in the house source video interface, and touching the flow control generates an initial viewing path in a preview interface, realizing modular management of the house source video clips; finally, a house source guided-viewing route matched with the target house source can be generated from the initial viewing path. A guiding user can thus design a customized guided-viewing route for the target house source by selecting the house source video clips corresponding to its functional spaces. During house source display, a house-hunting user can browse the house source along the guided-viewing route, which enriches the way house source information is presented, allows the display to be flexibly adjusted according to the user's own needs, and effectively meets the requirements of different users.
Drawings
Fig. 1 is a flowchart illustrating the steps of a method for generating a house source guided-viewing route provided in an embodiment of the present invention;
FIG. 2 is a schematic diagram of a house source interface provided in an embodiment of the invention;
FIG. 3 is a schematic diagram of a house source video interface provided in an embodiment of the invention;
FIG. 4 is a schematic illustration of a preview interface provided in an embodiment of the present invention;
FIG. 5 is a schematic diagram of tag interaction within a preview interface provided in an embodiment of the invention;
FIG. 6 is a schematic illustration of a preview interface provided in an embodiment of the present invention;
FIG. 7 is a schematic diagram within a roaming route editing interface provided in an embodiment of the invention;
FIG. 8 is a schematic diagram of a roaming route editing interface and a preview interface provided in an embodiment of the invention;
FIG. 9 is a schematic diagram of tag identifier interaction within a preview interface provided in an embodiment of the present invention;
fig. 10 is a block diagram of an apparatus for generating a house source guided-viewing route provided in an embodiment of the present invention;
fig. 11 is a block diagram of an electronic device provided in an embodiment of the invention;
fig. 12 is a schematic diagram of a computer-readable medium provided in an embodiment of the invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention more comprehensible, the present invention is described in detail below with reference to the accompanying drawings and specific embodiments.
As an example, with the development of Internet technology and the change of life rhythms, more and more house-hunting users tend to find a suitable house source by viewing it online. To provide real and effective house source information for house-hunting users, landlords (house source brokers and the like, hereinafter referred to as guiding users) need to publish relatively comprehensive house source information on relevant platforms. In the related art, most house source publishing platforms not only support online viewing through photos, videos and VR scenes, but also provide an online live-broadcast mode for house-hunting users to choose.
Online live broadcast is the most intuitive online house-viewing mode and can present the most realistic house source condition to a house-hunting user. Different groups of house-hunting users have different requirements for house sources, and how to meet these different requirements more efficiently is a problem to be solved. Online live broadcast at the current stage mostly takes place in the VR scene of a house source, introduces the whole VR scene, and the VR scene is recorded in advance, so it cannot provide personalized content and routes for users with different viewing needs; as a result, users cannot acquire the house source information they want quickly and accurately, and the house-hunting experience is poor. A house-hunting user who cannot attend the real-time live broadcast can only watch the recording or playback and cannot communicate with other house-hunting users or the guiding user, and when the guiding user wants to select the highlights of the live broadcast and send them to such a user, this is very difficult.
In view of the above, one of the core points of the embodiment of the present invention is that, after responding to an instruction of a user to design a house source guided-viewing route, the terminal may display the house source video clips corresponding to each functional space of the house source in a house source video interface, then generate an initial viewing path in a preview interface according to the sequence information of the target house source video clips selected by the guiding user, and finally generate a house source guided-viewing route matched with the target house source according to the initial viewing path. In this way, on the one hand, the guided-viewing content is handled in a modular manner and can be spliced according to the different requirements of house-hunting users to generate a personalized guided-viewing route to be shared with them; on the other hand, the communication efficiency of house source information is improved, so that house-hunting users can quickly learn about a house source according to their own needs.
To help those skilled in the art understand the technical solutions of the embodiments of the present invention, some terms used in the embodiments are explained below:
the house source interface is a detail display interface of a target house source. When a guiding user uploads house source data on the client, the terminal can generate a house source interface according to the house source data; when the guiding user wants to introduce the house source in more detail, the route design control on the house source interface can be clicked to design a house source guided-viewing route, so that house-hunting users can learn about the house source in more detail.
The functional space refers to each physical area inside the target house source, and can include a bedroom, a living room, a study, a toilet and the like in the target house source.
The house source video interface is used for storing and displaying house source video clips. It can be displayed after the user clicks the route design control on the house source interface, and it contains the house source video clips of the target house source, each corresponding to a functional space of the house source.
A house source video clip is a guided-viewing video that introduces one functional space of the target house source. It can be recorded before the house source is uploaded, or be a recorded clip of a VR house source live broadcast. The name of the corresponding functional space, such as "bedroom 1", "bedroom 2", "living room" or "kitchen", can be displayed on the house source video clip.
On the one hand, the house source guided-viewing route can be personalized by the guiding user according to the characteristics of the house source, so that the guided-viewing route of the target house source makes it convenient for house-hunting users to learn its specific information; on the other hand, house-hunting users can conveniently learn the details of the target house source and quickly understand the specific information of each of its functional spaces.
Specifically, referring to fig. 1, a flowchart of the steps of a method for generating a house source guided-viewing route provided in the embodiment of the present invention is shown, where the content displayed through the graphical user interface of an electronic terminal includes at least a house source interface corresponding to a target house source and a route design control; the target house source includes at least one functional space, and the method may specifically include the following steps:
in the embodiment of the present invention, the content displayed on the graphical user interface of the electronic terminal at least includes a house source interface corresponding to the target house source and a route design control, where the target house source includes at least one functional space, and the functional space is each area inside the target house source, and is a bedroom, a living room, a study room, a toilet, and the like in the target house source.
In the embodiment of the present invention, for the guiding user, the terminal may be a first electronic terminal (the terminal described below), and an application program may run in the terminal, such as a life-service application, an audio application or a game application. Life-service applications can be further divided by type, such as house renting and selling applications, car renting and selling applications, home service applications and entertainment applications. In the embodiment of the present application, a house renting and selling application running on a mobile terminal is taken as an example; the guiding user may upload the house source data of a corresponding house source through the application so as to generate the corresponding house source interface. It should be understood that the present invention is not limited thereto.
For the house-hunting user, the terminal may be a second electronic terminal in which an application program may also run, so that corresponding house source information can be browsed online, a house source can be searched for, a link to a house source guided-viewing route can be received, the route can be watched, and so on.
For the house renting and selling application, while the terminal runs the application, the guiding user can upload house source information in the application; after the upload succeeds, the terminal can display the house source interface of the house source on its graphical user interface, and the house-hunting user or the guiding user can browse the house source information in the corresponding house source interface, for example the price, the floor plan, and photos or videos of each functional space. In the embodiment of the invention, when the terminal runs the application and displays the house source interface through the graphical user interface, a route design control can be provided on the house source interface, so that the guiding user can edit the house source guided-viewing route through the route design control. In this way, when conducting a guided viewing, the guiding user can introduce the corresponding house source along the guided-viewing route, and when a house-hunting user browses the house source, its information can be quickly obtained through the guided-viewing route.
In a specific implementation, while the application is running, the guiding user can design the guided-viewing route of the target house source according to the characteristics of the house source or the guiding user's own needs, or, after learning the requirements of a house-hunting user, select the target house source videos of the functional spaces that the house-hunting user wants to know about and customize the guided-viewing route, so that the house-hunting user can learn about the house source information more efficiently. After the user enters the house source video interface corresponding to the target house source by touching the route design control, house source video clips can be selected on the house source video interface, and the name of the corresponding functional space can be displayed on each house source video clip shown on the interface.
Optionally, the house source video interface further includes a create control for adding a house source video clip of the target house source. When the terminal responds to a touch operation on the create control, an information acquisition interface can be displayed in the graphical user interface, and the current functional space is displayed in the information acquisition interface based on the field of view captured by the camera of the terminal; then, in response to an acquisition operation on the information acquisition interface, the house source video clip corresponding to the current functional space is acquired and displayed on the house source video interface. Thus, when the existing house source video clips in the house source video interface are not enough to express the house source information, or the clip for some functional space is missing, the guiding user can capture the corresponding functional space in real time to obtain the corresponding house source video clip, which guarantees the completeness and richness of the house source information.
In addition, the terminal can also respond to a touch operation on the create control by displaying a local multimedia interface of the terminal in the graphical user interface, where the local multimedia interface can include local video clips. After the guiding user selects a local video clip, the terminal can use it as the house source video clip corresponding to a certain functional space and display it on the house source video interface, so that when the existing clips are insufficient to express the house source information or a clip for a functional space is missing, the guiding user can choose from existing videos, enriching the ways in which video clips can be added.
In one example, referring to fig. 2, which shows a schematic diagram of the house source interface provided in the embodiment of the present invention, the content displayed through the graphical user interface of the terminal may include at least a house source interface 20 corresponding to a target house source and a route design control 210. When the user touches the route design control 210, the interface jumps to the house source video interface 30. Referring to fig. 3, a schematic diagram of the house source video interface provided in this embodiment is shown. After the user touches the route design control 210 on the house source interface 20, the terminal may respond to the touch operation on the route design control 210 by displaying the house source video interface 30 corresponding to the target house source, where the house source video interface 30 includes a flow control 310 and at least one house source video clip 320 corresponding to a functional space.
In one example, still referring to fig. 3, the house source video interface also includes a create control 330. When the guiding user finds that the house source video interface lacks the clip of some functional space of the target house source, or wants to add a new house source video clip, the create control can be clicked. When the terminal responds to the touch operation on the create control, the information acquisition interface can be displayed; the capture field of view of the acquisition device, i.e. the real scene corresponding to the current observation angle, is displayed in real time in the information acquisition interface together with the current functional space. The terminal can then respond to an acquisition operation on the information acquisition interface, acquire the house source video clip corresponding to the current functional space, and finally display it on the house source video interface. Alternatively, after the user clicks the create control 330, a local multimedia interface of the terminal can be displayed in the graphical user interface; the local multimedia interface includes at least one local video clip, i.e. a house source video clip acquired previously, from which the clip corresponding to the current functional space is selected and displayed on the house source video interface.
in the embodiment of the invention, the guiding user can select target house source video clips according to the characteristics of the house source or the needs of the house-hunting user, so as to design a personalized house source guided-viewing route.
In a specific implementation, while the guiding user selects target house source video clips on the house source video interface, the terminal can acquire the sequence information corresponding to the target house source video clips and display, on each selected clip, the sequence number corresponding to the order in which it was selected; when the guiding user clicks a selected clip again, the selection is cancelled. For example, if the user selects "bedroom 1" first, the house source video clip named "bedroom 1" is marked with sequence number 1; the user then selects "living room", and the clip named "living room" is marked with sequence number 2, and so on. When the user clicks "bedroom 1" again, the selection is cancelled, and in response the terminal removes the sequence number 1 of that clip and switches the sequence number of "living room" from 2 to 1. In this way the terminal can determine at least one target house source video clip according to the guiding user's selection operations in the house source video interface and acquire the corresponding sequence information, so as to generate the corresponding viewing path.
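A minimal sketch of this selection and renumbering behaviour (illustrative only; the names are assumptions and not part of the disclosure) could look like this:

```typescript
// Hypothetical selection state: clips in the order they were picked.
// Sequence numbers are simply 1-based positions in this array.
type Selection = string[]; // clip ids, e.g. ["bedroom 1", "living room"]

// Toggle a clip: select it (appending at the end) or, if it is already
// selected, cancel the selection so later clips move up by one.
function toggleClip(selection: Selection, clipId: string): Selection {
  return selection.includes(clipId)
    ? selection.filter(id => id !== clipId)  // deselect; later clips renumber implicitly
    : [...selection, clipId];                // append with the next sequence number
}

// Example from the description: select "bedroom 1", then "living room",
// then deselect "bedroom 1" -> "living room" becomes sequence number 1.
let sel: Selection = [];
sel = toggleClip(sel, "bedroom 1");   // ["bedroom 1"]
sel = toggleClip(sel, "living room"); // ["bedroom 1", "living room"]
sel = toggleClip(sel, "bedroom 1");   // ["living room"]
```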
103. In response to a touch operation on the flow control, a preview interface is displayed in the graphical user interface, where the preview interface includes an initial viewing path corresponding to the sequence information;
in the embodiment of the invention, the guiding user can preview the house source guided-viewing route through the initial viewing path and edit the initial viewing path as required, so as to adjust the guided-viewing content in a personalized way.
In a specific implementation, after the user selects the target house source videos and touches the flow control, the terminal can respond to the touch operation on the flow control and jump to the preview interface. The preview interface can display the initial viewing path generated by the terminal according to the sequence information of the target house source video clips selected by the guiding user, and the initial viewing path includes the video content controls corresponding to the target house source video clips. During the preview, the terminal plays the target house source video clips corresponding to the video content controls according to the sequence information. The flow control may be a "next step" button or the like, which is not limited in the present invention.
Specifically, the terminal can respond to a drag operation on a video content control by moving the video content control selected by the guiding user to the target position; when the drag operation ends, the sequence information of the video content control in the initial viewing path is adjusted, so that the target viewing path is obtained and the initial viewing path is adjusted in a personalized way.
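As a non-authoritative sketch of this drag-to-reorder adjustment (names are assumptions), the sequence information could be updated when the drag ends as follows:

```typescript
// Hypothetical reordering of video content controls in the initial viewing path.
// When a drag operation ends, the dragged control is moved to the drop index
// and the remaining controls keep their relative order.
function reorderOnDragEnd<T>(path: T[], fromIndex: number, toIndex: number): T[] {
  const result = [...path];
  const [dragged] = result.splice(fromIndex, 1); // remove the dragged control
  result.splice(toIndex, 0, dragged);            // insert it at the target position
  return result;                                 // new sequence = target viewing path
}

// e.g. moving "kitchen" before "living room":
// reorderOnDragEnd(["bedroom 1", "living room", "kitchen"], 2, 1)
// -> ["bedroom 1", "kitchen", "living room"]
```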
Optionally, during the preview of the initial viewing path, the guiding user may add a tag identifier at any position within the preview interface. After the user clicks the position where a tag identifier is to be added, the terminal responds to the touch operation at that position, acquires the initial display position of the touch operation on the graphical user interface, and displays a tag editing window for the initial display position in the preview interface; the user inputs first tag information in the tag editing window, the terminal responds to the first input operation on the tag editing window and acquires the first tag information corresponding to it, and after the user finishes editing, clicking the tag publishing control generates the initial tag identifier corresponding to the initial display position according to the tag information.
The tag editing window is used for editing tag information and includes several tag information controls, such as a tag name control and a tag content control; the tag content control can further include an add-picture control, an add-link control and the like. When the user wants to edit the tag information, each of these controls can be clicked and the corresponding information entered; the tag information can be a tag name, tag content, a picture, a link and the like, which is not limited in the present invention. The terminal then responds to the input operations on the tag information controls to obtain the first tag information corresponding to the first input operation, and after the user finishes editing and clicks the tag publishing control, the initial tag identifier corresponding to the initial display position is generated according to the tag information, so as to explain the house source information in detail and ensure its completeness.
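A minimal sketch of the tag information collected by this window and of turning it into an initial tag identifier (all names are illustrative assumptions):

```typescript
// Hypothetical first tag information gathered from the tag editing window.
interface TagInfo {
  name?: string;        // from the tag name control, e.g. "decorative painting"
  content?: string;     // from the tag content control
  pictureUrl?: string;  // from the add-picture control
  linkUrl?: string;     // from the add-link control
}

interface TagIdentifier {
  position: { x: number; y: number }; // initial display position of the touch
  info: TagInfo;
}

// Publishing: at least one piece of tag information is used to create the
// initial tag identifier at the initial display position.
function publishTag(position: { x: number; y: number }, info: TagInfo): TagIdentifier | null {
  const hasContent = Boolean(info.name || info.content || info.pictureUrl || info.linkUrl);
  return hasContent ? { position, info } : null; // nothing to publish without any tag info
}
```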
In addition, an editing control switch is included within the preview interface. When the editing control switch is on, the terminal can respond to a touch operation at any position in the preview interface by displaying the tag editing window; when the switch is off, the terminal does not respond to such touch operations.
In one example, referring to fig. 4, which shows a schematic diagram of the preview interface provided in the embodiment of the present invention, after the user clicks the flow control 310 in fig. 3, the interface jumps to the preview interface 40. The preview interface 40 includes the initial viewing path 410 and the video content controls 4101 in it. When the guiding user enters the preview interface 40, the terminal can play the target house source video clips corresponding to the video content controls 4101 in the order given by the initial viewing path 410. When the order of certain video content controls 4101 is to be changed, whether because of the house-hunting user's requirements or because the target video clips were selected in the wrong order, a video content control can be long-pressed and dragged to a target position in the initial viewing path; this changes the order of the video content controls, and the terminal generates the target viewing path according to the new order.
In addition, referring to fig. 5, a schematic diagram of tag interaction in the preview interface provided in the embodiment of the present invention is shown. The preview interface 50 includes an editing control switch 510. When the guiding user wants to explain a certain position of the target house source while previewing the initial viewing path, an initial tag identifier can be added at that position to enrich the house source information. The user first turns on the editing control switch 510 and then clicks the position in the preview interface where the initial tag identifier is to be added, so that the tag editing window 520 is displayed; the guiding user can input the tag information in the tag editing window and finally click the publishing control 530, so that the initial tag identifier is generated. In this example, the tag editing window includes a tag name control 5201 and a tag content control 5202, and the tag content control includes an add-picture control 52021 and an add-link control 52022. For example, when the user wants to describe a decorative painting on a wall in the currently playing video content control, the user can click the position of the painting in the current preview interface to display the tag editing window 520, click the tag name control 5201 and input "decorative painting", and then click the tag content control and input "purchased in 2019, undamaged"; after that, the add-picture control can be clicked to upload detailed photos of the painting, and the add-link control can be clicked to input a purchase link so that a house-hunting user can check its value; finally, clicking the publishing control 530 generates the initial tag identifier 540.
In an optional embodiment, when the watching user wants to add a label identifier at a certain position of the current preview interface, the terminal responds to the click operation at that position and pauses the preview of the initial watching path. The terminal obtains the time point in the preview progress and the video frame at which the click operation occurs, and after the label information is edited and the label publishing control is clicked, the initial label identifier is generated at that time point and on that video frame. In this way, while the house-finding user follows the house source watching route, the label identifier is displayed only when the preview progress reaches its time point, which prevents too much content from being displayed on the graphical user interface at once and affecting the viewing experience.
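A minimal sketch of anchoring a label to the click's time point and video frame, with assumed names, could be:

```typescript
// Anchor an initial label identifier to the time point and frame of the click, and
// show it only once playback reaches that time point (names are assumptions).

interface AnchoredLabel {
  timePointSec: number; // preview progress when the click occurred
  frameIndex: number;   // video frame carrying the label
  labelId: string;
}

function shouldDisplayLabel(label: AnchoredLabel, progressSec: number): boolean {
  // Show the label once the preview progress reaches its time point;
  // a display duration could additionally be applied.
  return progressSec >= label.timePointSec;
}
```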
After the terminal generates the initial label identifier in the current preview interface, the watching user can also move it. When the watching user wants to change the position of the initial label identifier, the user performs a drag operation on it; when the drag operation ends, the terminal determines the target display position of the drag operation in the graphical user interface and controls the initial label identifier to move from the initial display position to the target display position, completing the move and generating the target label identifier in the current preview interface.
In addition, when the watching user wants to modify the label content, the user can click the initial label identifier in the preview interface to display the label editing window and edit the label information a second time. A deletion control can also be displayed in this window. When the watching user wants to delete the initial label identifier, the user clicks the initial label identifier to display the label editing window, and the terminal responds to the touch operation on the label deletion control by deleting the initial label identifier from the preview interface. Secondary editing of the label is thus achieved, and the accuracy of the house source information is improved.
In one example, when the watching user wants to add a label identifier in the current preview interface, the user performs a click operation at the position where the label identifier is to be added; the terminal obtains the time point at which the click occurs in the preview progress and the current video frame, and generates the initial label identifier at that time point and on that frame. For example, if the click occurs at 5 minutes 20 seconds of the preview progress of the entire initial watching path, the terminal obtains that time point and the current video frame, pauses the target house source video clip corresponding to the current video content control, and displays the label editing window. After the user publishes the label through the label editing window, the initial label identifier is generated at the time point of 5 minutes 20 seconds and on the current video frame. When the roaming route and the roaming point information are later previewed and the preview progress reaches the time point at which the initial label identifier was added, the corresponding label identifier and its associated information interface are displayed. In addition, referring to fig. 5, when the position of the initial label identifier is to be changed, it can be dragged from the initial display position 501 to the target display position 502, and the terminal generates the target label identifier 560 according to the target position information. When the user wants to edit the label a second time, the user clicks the target label identifier 560, and the terminal responds to the click by displaying the label editing window 520 again; after the user enters the label information in the label information controls and clicks the publishing control 530, the target label identifier 560 is regenerated. During secondary editing, the label editing window 520 further includes a deletion control 550; when the user wants to delete the target label identifier 560, the user clicks the deletion control 550, and the terminal deletes the target label identifier 560 in response.
While the initial label identifier is being dragged, the terminal can obtain its target display position in the graphical user interface; if the target display position meets a preset condition, a deletion area is displayed in the graphical user interface, and the watching user can drag the initial label identifier into the deletion area to delete it. The preset condition may be a preset distance threshold, a preset time threshold, or the like, which is not limited by the present invention.
In a specific implementation, the deletion area is displayed when the duration of the drag operation on the initial label identifier exceeds a preset time threshold, or when the distance between the target display position of the dragged initial label identifier and the deletion area is less than a preset distance threshold.
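The decision of whether to show the deletion area could be sketched as follows; the names and the concrete threshold values are assumptions (the 2 s value is taken from the example below, the distance value is illustrative only):

```typescript
// Decide whether to show the deletion area during a drag of the initial label identifier.

interface DragState {
  elapsedMs: number;                   // how long the drag has lasted
  position: { x: number; y: number };  // current target display position
}

const TIME_THRESHOLD_MS = 2000; // preset time threshold, e.g. 2 s
const DISTANCE_THRESHOLD = 40;  // preset distance threshold in screen units (assumption)

function shouldShowDeleteArea(drag: DragState, deleteArea: { x: number; y: number }): boolean {
  const dx = drag.position.x - deleteArea.x;
  const dy = drag.position.y - deleteArea.y;
  const distance = Math.hypot(dx, dy);
  return drag.elapsedMs >= TIME_THRESHOLD_MS || distance <= DISTANCE_THRESHOLD;
}
```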
In addition, a roaming route generation control is further included in the preview interface, and the terminal responds to a touch operation acting on the roaming route generation control by generating a house source watching route matched with the target house source according to the initial watching path.
In an example, referring to fig. 6, which shows a schematic diagram of a preview interface provided in an embodiment of the present invention, while the watching user drags the initial label identifier 640 in the preview interface 60, the terminal determines whether to display the deletion area 610 according to a preset distance threshold or a preset time threshold. For example, when the preset condition is a distance threshold, it may be set to 1 cm: when the watching user drags the initial label identifier 640 to within 1 cm of the deletion area 610, the deletion area 610 is displayed. When the preset condition is a time threshold, it may be set to 2 s: when the drag operation on the initial label identifier 640 lasts 2 s or longer, the deletion area 610 is displayed. Once the deletion area 610 is shown, the watching user can delete the initial label identifier 640 by dragging it into the deletion area 610. The preview interface 60 further includes a roaming route generation control 620; when the watching user clicks it, the terminal responds to the click operation and generates the house source watching route of the target house source according to the initial watching path.
Step 104, generating a house source watching route matched with the target house source according to the initial watching path.
In the embodiment of the invention, the watching user can preview and modify the edited roaming route in the roaming route editing interface, and the terminal finally uses the editing information of the roaming route editing interface to generate the house source watching route, thereby confirming the editing information.
In a specific implementation, the terminal responds to a touch operation acting on the roaming route generation control by displaying a roaming route editing interface, where the roaming route editing interface includes at least the roaming point information corresponding to the video content controls, a house type diagram (floor plan) corresponding to the target house source, and a roaming route displayed in the house type diagram and corresponding to the initial watching path.
The roaming route includes positioning identifiers corresponding to the roaming point information. The terminal responds to a touch operation on a positioning identifier by determining the target roaming point information corresponding to it and switching that target roaming point information from a first display style to a second display style; the second display style may be, for example, a highlighted display, and the present invention is not limited in this respect.
The roaming point information also includes the function space name corresponding to the video content control, a preview effect control, and the label name of the initial label identifier; when the user clicks the preview effect control, the target house source video clip corresponding to that roaming point information can be previewed.
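A data-structure sketch of what the roaming route editing interface presents, with assumed names, could be:

```typescript
// Per-control roaming point information and the roaming route drawn on the house
// type diagram (floor plan); all names are assumptions.

interface RoamingPoint {
  functionalSpaceName: string; // e.g. "living room", from the video content control
  clipId: string;              // target house source video clip, playable via the preview effect control
  labelNames: string[];        // names of the initial label identifiers added in this clip
  locationOnFloorPlan: { x: number; y: number }; // positioning identifier on the roaming route
}

interface RoamingRoute {
  houseSourceId: string;
  points: RoamingPoint[]; // ordered according to the initial watching path
}
```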
The roaming route editing interface includes a preview control; when the user clicks the preview control, the interface jumps to the preview interface to preview the roaming route. While the roaming route is being previewed, when the preview progress reaches the time point at which an initial label identifier was added, the initial label identifier corresponding to that time point can be displayed.
In addition, the roaming route editing interface further includes a publishing and sharing control. When the watching user performs a touch operation on the publishing and sharing control, the terminal obtains the editing information of the roaming route editing interface, generates the house source watching route according to the editing information, publishes the house source watching route in the house source interface, and can also generate a link to the house source watching route and share it with the second electronic terminal of the house-finding user.
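The publishing step and the shareable link could be sketched as follows; the editing-information fields and the link format are assumptions, not specified by the patent:

```typescript
// Publish the house source watching route and produce a shareable link for the
// house-finding user's second terminal (names and link scheme are assumptions).

interface RouteEditingInfo {
  houseSourceId: string;
  orderedClips: string[]; // target house source video clip ids, in watching order
  labelIds: string[];     // label identifiers added during preview
}

interface PublishedWatchingRoute extends RouteEditingInfo {
  publishedAt: Date;
  shareLink: string;      // link shared to the house-finding user's second terminal
}

function publishWatchingRoute(info: RouteEditingInfo): PublishedWatchingRoute {
  const shareLink = `app://house-source/${info.houseSourceId}/watching-route`;
  return { ...info, publishedAt: new Date(), shareLink };
}
```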
In an example, referring to fig. 7, a schematic diagram of the roaming route editing interface provided in the embodiment of the present invention is shown. When the watching user clicks the roaming route generation control 620 in fig. 6, the roaming route editing interface 70 is displayed. The roaming route editing interface 70 includes a house type diagram 710 of the target house source and roaming point information 720 corresponding to the video content controls; a roaming route 7101 generated according to the initial watching path is drawn on the house type diagram, and positioning identifiers 7102 corresponding to the roaming point information are arranged on the roaming route. When the user clicks a positioning identifier 7102, the roaming point information 720 corresponding to it is switched from the first display style to the second display style; for example, when the user clicks the positioning identifier of the "living room" in the house type diagram 710, the "living room" roaming point information is highlighted at the bottom of the page, and the clicked positioning identifier is also highlighted.
In an example, referring to fig. 8, an interaction diagram of the roaming route editing interface and the preview interface provided in the embodiment of the present invention is shown. The roaming point information 820 further includes a function space name 8203 corresponding to the video content control, a preview effect control 8201, and the label name 8202 of the initial label identifier added while previewing the initial watching path, so that when the watching user clicks the preview effect control, the target house source video clip corresponding to the roaming point information can be previewed. For example, when the user wants to preview the target house source video clip of the "bedroom", the user finds the roaming point information marked "bedroom" and clicks its preview effect control 8201, and the interface jumps to the preview interface 801; the terminal then determines the video content control corresponding to the "bedroom" and plays the corresponding target house source video clip in the preview interface 801. During the preview, when the preview progress reaches the time point at which an initial label identifier was added, that initial label identifier is displayed. If the user needs to add or modify a label identifier during the preview, the user can click the edit control switch 840 and edit within the current video content control. The roaming route editing interface 80 further includes a preview control 810; when the user wants to preview the whole roaming route, the user can click the preview control 810. While the roaming route is being previewed, when the preview progress reaches the time point at which an initial label identifier was added, the terminal displays the initial label identifier corresponding to that time point. The roaming route editing interface 80 also includes a publishing and sharing control 830; when the watching user finishes editing and touches the publishing and sharing control 830, the terminal obtains the editing information of the roaming route editing interface 80, generates the house source watching route according to the editing information, publishes it in the house source interface, and can generate a link to the house source watching route and share it with the second electronic terminal of the house-finding user.
In one embodiment, after the house source watching route is generated, the watching user can use the route for live watching.
In a specific implementation, a preview interface of the house source watching route is displayed on the graphical user interface of the watching user's terminal. The preview interface includes the initial watching path, which contains at least one video content control, and the video content controls correspond to the functional spaces of the target house source. During a live watching session, the watching user explains the target house source according to the house source watching route. The graphical user interface of the watching user's terminal can display the live bullet-screen comments and viewer messages in real time; when a comment or message names a functional space the viewer wants to know about, the watching user can click the video content control corresponding to that functional space, and the terminal responds to the touch operation by displaying the corresponding target house source video clip. For example, if a bullet-screen comment says "I would like the host to introduce the kitchen", the watching user can click the video content control corresponding to the kitchen, display the target house source video clip of the kitchen, and explain it, so that the display of the house source information can be adjusted flexibly and the requirements of different users can be met effectively.
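The jump to a requested functional space during live watching amounts to a lookup in the ordered path; a minimal sketch with assumed names:

```typescript
// Find the target house source video clip for a functional space requested in a
// bullet-screen comment or message (names are assumptions).

interface PathEntry {
  functionalSpace: string;
  clipId: string;
}

function clipForRequest(path: PathEntry[], requestedSpace: string): string | undefined {
  // e.g. requestedSpace = "kitchen"
  return path.find(entry => entry.functionalSpace === requestedSpace)?.clipId;
}
```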
The display of the initial label identifier is triggered when the playing progress reaches the time point of the initial label identifier; when it is displayed, its label name can be shown above it, so that the initial label identifier corresponding to that time point and the associated information interface of the label identifier are presented together, which makes it more convenient for the watching user to explain the house source.
In one embodiment, the watching user can change the design of the house source watching route according to the requirements of the house-finding user. When the watching user redesigns the house source watching route, the editing information of the previously generated house source watching route is retained in the house source video interface, the preview interface, and the roaming route editing interface and displayed in the graphical user interface, so that the route does not have to be designed from scratch every time it is changed, which improves the convenience of designing the house source watching route.
In a specific implementation, when the house-finding user has different requirements, the watching user can change the design of the house source watching route accordingly. The watching user first clicks the route design control, and the house source video interface displays the target house source video clips and the sequence information selected last time. The watching user can either click the flow control directly to enter the preview interface and edit the initial watching path, long-pressing and dragging the video content controls to change their positions in the initial watching path according to the client's requirements and thereby generate a target watching path, or reselect house source video clips according to the house-finding user's requirements and then click the flow control to enter the preview interface. The label identifiers previously added by the watching user are still retained in the preview interface, and the watching user can add new label identifiers according to the house-finding user's requirements so as to display more house source details. The watching user then clicks the roaming route generation control, and the terminal displays the roaming route editing interface in response. In that interface the watching user can preview the roaming route and the information of each roaming point to confirm the editing information, and finally click the publishing and sharing control; the terminal responds to the click by generating a house source watching route matched with the target house source, publishing it in the house source interface, and optionally generating a link to the route and sharing it with the second electronic terminal of the house-finding user, so that a personalized house source watching route can be customized for different client requirements.
In one embodiment, when the house-finding user runs the renting and selling application program on the second electronic terminal, the user can find a desired house source according to the house source information in the house source interface. When the house-finding user wants to learn more details of the target house source, the user can watch the house source watching route of the target house source in the house source interface. When the house-finding user is interested in a particular functional space, the user can click the video content control corresponding to that functional space on the target watching path and jump directly to the target house source video clip of that functional space, so that the house-finding user can browse selectively and obtain the house source information more efficiently.
In an optional embodiment, after the house-finding user has determined a target house source and wants to learn more of its details, the house-finding user can send the house-finding requirements to the watching user; the watching user designs a house source watching route according to those requirements and shares a link to the route with the second electronic terminal of the house-finding user. After receiving the link on the second electronic terminal, the house-finding user clicks it and jumps to the house source interface of the target house source in the renting and selling application program to watch the house source watching route.
When the watching progress reaches the time point of an initial label identifier, its display is triggered, and the label name information and the associated information interface of the label identifier are displayed above it. The house-finding user can click the initial label identifier to display a user label editing window, which contains the detail overview information of the initial label identifier and a message control; the house-finding user can enter message information in the message control to take part in the message interaction. If the watching progress has not yet reached the time point at which the initial label identifier was added, the identifier is not displayed. While watching the house source watching route, the house-finding user can leave messages through the message control of the label identifier, so that the house-finding user can communicate asynchronously with the watching user or other offline users, with browsing positioned at the matching pictures and text, which improves communication efficiency.
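The asynchronous message interaction attached to a label identifier could be sketched as below; the storage structure and field names are assumptions:

```typescript
// Messages left on a label identifier while the house-finding user watches the route.

interface LabelMessage {
  labelId: string;
  author: string;   // house-finding user, watching user, or another offline user
  text: string;
  createdAt: Date;
}

const messageBoard = new Map<string, LabelMessage[]>(); // keyed by label identifier

function leaveMessage(labelId: string, author: string, text: string): void {
  const thread = messageBoard.get(labelId) ?? [];
  thread.push({ labelId, author, text, createdAt: new Date() });
  messageBoard.set(labelId, thread);
}
```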
In an example, referring to fig. 9, a schematic diagram of label identifier interaction in the preview interface of the renting and selling application program on the second electronic terminal provided in the embodiment of the present invention is shown. While the house-finding user watches the house source watching route, when the watching progress reaches the time point of the initial label identifier, its display is triggered: the initial label identifier 960 is displayed in the preview interface 90 of the renting application program on the second electronic terminal, and its label name can be shown above it. When the house-finding user wants to see the details of the initial label identifier 960 or to leave a message, the user clicks the initial label identifier 960 and the detail overview window 920 is displayed. The detail overview window includes the label content 930 and the label picture 940 of the initial label identifier, as well as a message control 950; the house-finding user can enter message information in the message control 950 and thereby communicate asynchronously with the watching user or other offline users. For example, when the watching progress reaches 5 minutes 12 seconds, the added initial label identifier 960 is displayed, with the label name "decorative painting" shown above it. The house-finding user clicks the initial label identifier 960, and the detail overview window 920 is displayed. The detail overview window 920 includes the label content 930, such as "purchased in 2019, no damage"; it can also include the label picture, a detail photo of the painting, as well as the purchase link of the painting and/or other information. After viewing this information, the house-finding user can ask questions or leave messages in the message control, achieving asynchronous communication with the watching user or other offline users and improving communication efficiency.
In the embodiment of the invention, a graphical user interface is provided through a terminal, and the content displayed in the graphical user interface includes at least a house source interface corresponding to a target house source and a route design control, where the target house source includes at least one functional space. The terminal can respond to a touch operation on the route design control by displaying a house source video interface corresponding to the target house source; at least one target house source video clip is selected in the house source video interface, the flow control is touched, and an initial watching path is generated in the preview interface, realizing modular management of the house source video clips. Finally, a house source watching route matched with the target house source is generated according to the initial watching path, so that the watching user can design the watching path of the target house source in a customized way by selecting the house source video clips corresponding to the functional spaces of the target house source. During house source display, the house-finding user can browse the house source along the house source watching route, which enriches the ways in which house source information is presented; through the house source watching route, the house-finding user can also flexibly adjust the display of house source information according to the user's own needs, so that the requirements of different users can be met effectively.
It should be noted that, for simplicity of description, the method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the illustrated order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments of the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no particular act is required to implement the invention.
Referring to fig. 10, which is a block diagram illustrating the structure of a house source watching route generation apparatus provided in an embodiment of the present invention, the content displayed through a graphical user interface of an electronic terminal includes at least a house source interface corresponding to a target house source and a route design control; the target house source includes at least one functional space, and the apparatus may specifically include the following modules:
a house source video display module 1001, configured to display a house source video interface corresponding to the target house source in response to a touch operation performed on the route design control, where the house source video interface includes a flow control and at least one house source video clip corresponding to the functional space;
the room source video selection module 1002 is configured to select at least one target room source video segment in response to a selection operation for the room source video segment, and acquire sequence information corresponding to the at least one target room source video segment;
a preview interface display module 1003, configured to display a preview interface in the graphical user interface in response to a touch operation applied to the process control, where the preview interface includes an initial viewing path corresponding to the sequence information;
and a roaming route generating module 1004 for generating a house source watching route matched with the target house source according to the initial watching route.
In an optional embodiment, the graphical user interface further displays a route design control, and the house source video display module 1001 is specifically configured to:
and responding to the touch operation acted on the route design control, and displaying a house source video interface corresponding to the target house source.
In an optional embodiment, the room source video interface further includes at least one room source video clip corresponding to the functional space, and the room source video selection module 1002 is specifically configured to:
and responding to the selection operation aiming at the room source video clip, selecting at least one target room source video clip, and acquiring the sequence information corresponding to the at least one target room source video clip.
In an optional embodiment, the house source video interface includes a flow control, and the preview interface display module 1003 is specifically configured to:
and responding to the touch operation acted on the flow control, and displaying a preview interface in the graphical user interface.
In an optional embodiment, the preview interface includes an initial viewing path corresponding to the sequence information, and the preview interface presenting module 1003 includes:
the initial path with watching display sub-module responds to touch operation acting on the flow control, displays a preview interface in the graphical user interface, and displays an initial path with watching corresponding to the sequence information in the preview interface;
the content playing sub-module is used for sequentially playing the target house source video clips corresponding to the video content controls in the preview interface according to the sequence information;
and the video content control adjusting submodule is used for responding to the end of the dragging operation acted on the video content control, adjusting the sequence information of the video content control in the initial watching path and obtaining a target video watching path.
In an alternative embodiment, further comprising:
and the label editing window generating submodule is used for responding to the touch operation acting on any position in the preview interface, acquiring the initial display position of the touch operation on the graphical user interface, and displaying the label editing window aiming at the initial display position in the preview interface.
In an alternative embodiment, further comprising:
and the initial label identification generation submodule is used for responding to a first input operation acted on the label editing window, acquiring first label information corresponding to the first input operation and generating an initial label identification corresponding to the initial display position.
In an alternative embodiment, further comprising:
the tag name information acquisition submodule is used for responding to a first input operation aiming at the tag name control and acquiring tag name information;
the tag content information acquisition submodule is used for responding to a second input operation aiming at the tag content control and acquiring tag content information;
the tag picture information acquisition sub-module is used for responding to a third input operation aiming at the added picture control and acquiring tag picture information;
and the label link information acquisition submodule is used for responding to the fourth input operation aiming at the adding link control and acquiring the label link information.
In an alternative embodiment, further comprising:
and the target label identification generation submodule is used for responding to a second input operation aiming at the label editing window, acquiring second label information corresponding to the second input operation, and generating the target label identification corresponding to the initial display position according to the second label information.
In an alternative embodiment, further comprising:
and the label identification deleting submodule is used for responding to the touch operation acted on the label deleting control and deleting the initial label identification corresponding to the initial display position in the preview interface.
In an alternative embodiment, further comprising:
and the tag identification acquisition submodule is used for acquiring an initial display position of the touch operation on the graphical user interface, and a target video frame and a target time point which correspond to the initial display position in the current house source video clip.
In an alternative embodiment, further comprising:
and the tag identifier issuing sub-module is used for responding to the touch operation acted on the tag issuing control and generating an initial tag identifier corresponding to the initial display position on the target video frame and the target time point acquired in the preview interface according to the first tag information.
In an alternative embodiment, further comprising:
and the label identification dragging submodule is used for responding to the end of the dragging operation acted on the initial label identification, determining the target display position corresponding to the graphical user interface of the dragging operation, and controlling the initial label identification to move from the initial display position to the target display position.
In an alternative embodiment, further comprising:
the target display position acquisition module is used for responding to dragging operation acting on the initial label identification, acquiring a target display position of the initial label identification in the graphical user interface, and displaying a deletion area in the graphical user interface if the target display position meets a preset condition;
and the initial label identification dragging and deleting submodule is used for responding to dragging the initial label identification to the deleting area and deleting the initial label identification in the preview interface.
In an optional embodiment, the target display position obtaining module is specifically configured to:
and responding to the dragging operation acted on the initial label identification, acquiring a target display position of the initial label identification in the graphical user interface, and displaying a deleted area in the graphical user interface if the distance between the target display position and the deleted area is smaller than a preset distance threshold.
In an optional embodiment, the target display position obtaining module is specifically configured to:
and responding to the dragging operation acted on the initial label identification, acquiring the dragging operation time of the initial label identification in the graphical user interface corresponding to the dragging operation, and if the dragging operation time is greater than a preset time threshold, displaying a deletion area in the graphical user interface.
In an alternative embodiment, further comprising:
the editing control switch opening sub-module is used for responding to touch operation acting on any position in the preview interface under the condition that the editing control switch is in an opening state, acquiring the initial display position of the touch operation on the graphical user interface, and displaying a label editing window aiming at the initial display position in the preview interface;
and the editing control switch closing sub-module does not respond to the touch operation for generating the label editing window under the condition that the editing control switch is in a closed state.
In an alternative embodiment, further comprising:
and the roaming route preview sub-module is used for responding to touch operation acted on the preview control, displaying a preview interface, acquiring an initial watching path corresponding to the roaming route and playing a target video clip corresponding to the initial watching path.
In an alternative embodiment, further comprising:
and the initial tag identification display submodule displays the initial tag identification corresponding to the time point and the associated information interface corresponding to the tag identification when the playing progress reaches the time point.
In an alternative embodiment, further comprising:
and the newly-built room source video clip submodule is used for displaying the information room source video clip on the room source video interface.
In an optional embodiment, the newly created house source video clip sub-module is specifically configured to:
responding to touch operation acting on the newly-built control, displaying an information acquisition interface in the graphical user interface, and displaying a current function space in the information acquisition interface;
and responding to the acquisition operation aiming at the information acquisition interface, acquiring the house source video clip corresponding to the current functional space, and displaying the house source video clip on the house source video interface.
In an optional embodiment, the newly created house source video clip sub-module is specifically configured to:
responding to touch operation acting on the newly-built control, and displaying a local multimedia interface of the electronic terminal in the graphical user interface, wherein the local multimedia interface at least comprises a local video clip;
and selecting the house source video clip corresponding to the functional space, and displaying the house source video clip on a house source video display interface.
In an alternative embodiment, the roaming route generating module 1004 is specifically configured to:
and generating a house source watching route matched with the target house source according to the initial watching path.
In an alternative embodiment, the roaming route generation module 1004 includes:
and the roaming route editing interface display sub-module is used for responding to the touch operation acted on the roaming route generation control and displaying the roaming route editing interface.
And the roaming route generation submodule is used for responding to the touch operation acting on the issuing and sharing control, and generating a house source watching route aiming at the target house source by adopting the roaming route and the roaming point information.
And the roaming point information style switching submodule is used for responding to the touch operation aiming at the positioning identifier, determining target roaming point information corresponding to the touch operation and switching the target roaming point information from a first display style to a second display style.
And the roaming point information preview submodule is used for responding to the touch operation acted on the preview effect control, determining target roaming point information corresponding to the touch operation and playing a target house source video clip corresponding to the target roaming point information.
For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
In addition, an embodiment of the present invention further provides an electronic device, as shown in fig. 11, which includes a processor 1101, a communication interface 1102, a memory 1103 and a communication bus 1104, where the processor 1101, the communication interface 1102 and the memory 1103 communicate with each other through the communication bus 1104;
the memory 1103 is configured to store a computer program;
the processor 1101 is configured to implement the following steps when executing the program stored in the memory 1103:
responding to touch operation acted on the route design control, and displaying a house source video interface corresponding to the target house source, wherein the house source video interface comprises a flow control and at least one house source video clip corresponding to the functional space;
responding to the selection operation aiming at the room source video clips, selecting at least one target room source video clip, and acquiring sequence information corresponding to the at least one target room source video clip;
responding to touch operation acting on the flow control, and displaying a preview interface in the graphical user interface, wherein the preview interface comprises an initial viewing path corresponding to the sequence information;
and generating a house source watching route matched with the target house source according to the initial watching path.
In an alternative embodiment, the initial watching path includes a video content control corresponding to the target house source video clip, the preview interface includes a roaming route generation control, and the method includes:
responding to touch operation acting on the roaming route generation control, and displaying a roaming route editing interface, wherein the roaming route editing interface at least comprises roaming point information corresponding to the video content control, a house type graph corresponding to the target house source and a roaming route which is displayed in the house type graph and corresponds to the initial watching path;
and generating a house source watching route matched with the target house source according to the roaming route and the roaming point information, wherein the roaming point information comprises a function space name and a label name corresponding to the video content control.
In an alternative embodiment, the roaming route editing interface includes a publishing and sharing control, and the method includes:
responding to the touch operation acting on the issuing and sharing control, and generating a house source watching route aiming at the target house source by adopting the roaming route and the roaming point information.
In an optional embodiment, the roaming route includes a location identifier corresponding to roaming point information, and includes:
and responding to the touch operation aiming at the positioning identifier, determining target roaming point information corresponding to the touch operation, and switching the target roaming point information from a first display style to a second display style.
In an optional embodiment, the roaming point information includes a preview effect control, and further includes:
and responding to the touch operation acted on the preview effect control, determining target roaming point information corresponding to the touch operation, and playing a target house source video clip corresponding to the target roaming point information.
In an alternative embodiment, comprising:
responding to touch operation acting on the flow control, displaying a preview interface in the graphical user interface, and displaying an initial viewing path corresponding to the sequence information in the preview interface;
and sequentially playing the target house source video clips corresponding to the video content controls in the preview interface according to the sequence information.
In an alternative embodiment, further comprising:
and responding to the end of the dragging operation acted on the video content control, and adjusting the sequence information of the video content control in the initial watching path to obtain a target video watching path.
In an alternative embodiment, further comprising:
responding to touch operation acting on any position in the preview interface, acquiring an initial display position of the touch operation on the graphical user interface, and displaying a label editing window aiming at the initial display position in the preview interface;
and responding to a first input operation acted on the label editing window, acquiring first label information corresponding to the first input operation, and generating an initial label identifier corresponding to the initial display position.
In an optional embodiment, the label editing window further includes a label name control and a label content control, and the label content control further includes an add picture control and an add link control, including:
responding to a first input operation aiming at the label name control to acquire label name information;
and/or responding to a second input operation aiming at the label content control to acquire label content information;
and/or responding to a third input operation aiming at the added picture control to acquire label picture information;
and/or responding to a fourth input operation aiming at the added link control to acquire label link information;
and generating an initial label identifier corresponding to the initial position by adopting at least one of the label name information, the label content information, the label picture information and the label link information.
In an alternative embodiment, further comprising:
responding to touch operation acting on the initial label identification, and displaying a label editing window corresponding to the initial label identification;
and responding to a second input operation aiming at the label editing window, acquiring second label information corresponding to the second input operation, and generating a target label identifier corresponding to the initial display position according to the second label information.
In an alternative embodiment, the label editing window further includes a label deletion control, further including:
responding to touch operation acting on the initial label identification, and displaying a label editing window corresponding to the initial label identification;
and responding to the touch operation acted on the label deleting control, and deleting the initial label identification corresponding to the initial display position in the preview interface.
In an alternative embodiment, comprising:
and responding to touch operation acting on any position on the graphical user interface, pausing the playing of a current target room source video clip corresponding to a current video content control, acquiring an initial display position of the touch operation on the graphical user interface, a target video frame and a target time point corresponding to the initial display position in the current room source video clip, and displaying a label editing window aiming at the initial display position in the preview interface.
In an optional embodiment, the tab editing window includes a tab publishing control therein, further including:
and responding to touch operation acting on the label issuing control, and generating an initial label identifier corresponding to the initial display position on the target video frame and the target time point acquired in the preview interface according to the first label information.
In an alternative embodiment, further comprising:
and responding to the end of the dragging operation acted on the initial label identification, determining the target display position corresponding to the graphical user interface of the dragging operation, and controlling the initial label identification to move from the initial display position to the target display position.
In an alternative embodiment, further comprising:
responding to the dragging operation acting on the initial label identification, acquiring a target display position of the initial label identification in the graphical user interface, and if the target display position meets a preset condition, displaying a deletion area in the graphical user interface;
and responding to drag of the initial label identification to the deletion area, and deleting the initial label identification in the preview interface.
In an alternative embodiment, comprising:
and responding to the dragging operation acted on the initial label identification, acquiring a target display position of the initial label identification in the graphical user interface, and displaying a deleted area in the graphical user interface if the distance between the target display position and the deleted area is smaller than a preset distance threshold.
In an alternative embodiment, comprising:
and responding to the dragging operation acted on the initial label identification, acquiring the dragging operation time of the initial label identification in the graphical user interface corresponding to the dragging operation, and if the dragging operation time is greater than a preset time threshold, displaying a deletion area in the graphical user interface.
In an optional embodiment, the preview interface further includes an edit control switch, further including:
under the condition that the editing control switch is in an open state, responding to touch operation acting on any position in the preview interface, acquiring an initial display position of the touch operation on the graphical user interface, and displaying a label editing window aiming at the initial display position in the preview interface;
and under the condition that the editing control switch is in a closed state, the touch operation for generating the label editing window is not responded.
In an alternative embodiment, the roaming route editing interface further includes a preview control, and the method further includes:
responding to touch operation acting on a preview control, displaying a preview interface, acquiring an initial watching path corresponding to the roaming route, and playing a target video clip corresponding to the initial watching path;
and when the playing progress reaches the time point, displaying the initial tag identification corresponding to the time point and the associated information interface corresponding to the tag identification.
In an optional embodiment, the house source video interface further includes a new control, including:
responding to touch operation acting on the newly-built control, displaying an information acquisition interface in the graphical user interface, and displaying a current function space in the information acquisition interface;
and responding to the acquisition operation aiming at the information acquisition interface, acquiring the house source video clip corresponding to the current functional space, and displaying the house source video clip on the house source video interface.
In an optional embodiment, the house source video interface further includes a new control, and further includes:
responding to touch operation acting on the newly-built control, and displaying a local multimedia interface of the electronic terminal in the graphical user interface, wherein the local multimedia interface at least comprises a local video clip;
and selecting the house source video clip corresponding to the functional space, and displaying the house source video clip on a house source video display interface.
The communication bus mentioned in the above terminal may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the terminal and other equipment.
The memory may include a Random Access Memory (RAM), or may also include a non-volatile memory (non-volatile memory), such as at least one disk memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The processor may be a general-purpose processor, such as a Central Processing Unit (CPU) or a Network Processor (NP); it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
In yet another embodiment of the present invention, as shown in fig. 12, there is further provided a computer-readable storage medium 1201, which stores instructions that, when executed on a computer, cause the computer to execute the method for generating a house source watching route described in the above embodiments.
In yet another embodiment, a computer program product is provided, which includes instructions that, when run on a computer, cause the computer to execute the method for generating a house source watching route described in the above embodiments.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When loaded and executed on a computer, cause the processes or functions described in accordance with the embodiments of the invention to occur, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, from one website site, computer, server, or data center to another website site, computer, server, or data center via wired (e.g., coaxial cable, fiber optic, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, wireless, microwave, etc.). The computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device including one or more available media integrated servers, data centers, and the like. The usable medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid state disk (ssd)), among others.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The method and apparatus for generating a house source watching route provided by the present invention are described in detail above, and specific examples are applied herein to explain the principle and implementation of the invention; the description of the examples is only intended to help understand the method and its core idea. Meanwhile, for a person skilled in the art, changes may be made to the specific embodiments and the application scope according to the idea of the present invention; in view of the above, the content of this specification should not be construed as limiting the present invention.
Claims (24)
1. A method for generating a house source watching route, characterized in that the content displayed through a graphical user interface of an electronic terminal includes at least a house source interface corresponding to a target house source and a route design control; wherein the target house source comprises at least one functional space, the method comprising:
responding to touch operation acted on the route design control, and displaying a house source video interface corresponding to the target house source, wherein the house source video interface comprises a flow control and at least one house source video clip corresponding to the functional space;
responding to the selection operation aiming at the room source video clips, selecting at least one target room source video clip, and acquiring sequence information corresponding to the at least one target room source video clip;
responding to touch operation acting on the process control, and displaying a preview interface in the graphical user interface, wherein the preview interface comprises an initial watching path corresponding to the sequence information;
generating a house source watching route matched with the target house source according to the initial watching path;
the method for generating the house source watching path matched with the target house source according to the initial watching path comprises the following steps of:
responding to touch operation acted on the roaming route generation control, and displaying a roaming route editing interface, wherein the roaming route editing interface at least comprises roaming point information corresponding to the video content control, a house type diagram corresponding to the target house source, and a roaming route which is displayed in the house type diagram and corresponds to the initial viewing path;
and generating a house source watching route matched with the target house source according to the roaming route and the roaming point information.
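Purely as an orientation aid, the following TypeScript sketch models the data flow described in claim 1: selected house source video clips, their sequence information, and the roaming route assembled from roaming point information. Every type, field, and function name below is an assumption introduced for this example, not language from the claims.

```typescript
// Illustrative sketch of the claim-1 data flow; all names are assumptions.
interface HouseSourceVideoClip {
  clipId: string;
  functionalSpace: string;   // e.g. "living room", "kitchen"
  videoUrl: string;
  durationSec: number;
}

interface RoamingPoint {
  functionalSpaceName: string;  // shown in the roaming point information
  labelName: string;
  clipId: string;
}

interface WatchingRoute {
  houseSourceId: string;
  orderedClips: HouseSourceVideoClip[];  // follows the acquired sequence information
  roamingPoints: RoamingPoint[];
}

// Build the initial viewing path from the clips the user selected, in selection order.
function buildInitialViewingPath(selected: HouseSourceVideoClip[]): HouseSourceVideoClip[] {
  return [...selected];  // sequence information is the order of selection here
}

// Generate the final house source watching route from the roaming route and roaming point information.
function generateWatchingRoute(
  houseSourceId: string,
  initialPath: HouseSourceVideoClip[],
  roamingPoints: RoamingPoint[],
): WatchingRoute {
  return { houseSourceId, orderedClips: initialPath, roamingPoints };
}
```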
2. The method of claim 1, wherein the roaming point information comprises a functional space name and a label name corresponding to the video content control.
3. The method of claim 1, wherein the roaming route editing interface comprises a publishing and sharing control, and the generating the house source watching route matching the target house source according to the roaming route and the roaming point information comprises:
in response to a touch operation acting on the publishing and sharing control, generating the house source watching route for the target house source by using the roaming route and the roaming point information.
4. The method of claim 1, wherein the roaming route comprises a positioning identifier corresponding to the roaming point information, the method further comprising:
in response to a touch operation for the positioning identifier, determining target roaming point information corresponding to the touch operation, and switching the target roaming point information from a first display style to a second display style.
5. The method of claim 1, wherein the roaming point information comprises a preview effect control, the method further comprising:
in response to a touch operation acting on the preview effect control, determining target roaming point information corresponding to the touch operation, and playing a target house source video clip corresponding to the target roaming point information.
6. The method of claim 1, wherein the displaying a preview interface in the graphical user interface in response to the touch operation acting on the flow control comprises:
in response to the touch operation acting on the flow control, displaying the preview interface in the graphical user interface, and displaying the initial viewing path corresponding to the sequence information in the preview interface; and
sequentially playing, in the preview interface, the target house source video clips corresponding to the video content controls according to the sequence information.
7. The method of claim 6, further comprising:
in response to an end of a dragging operation acting on the video content control, adjusting the sequence information of the video content control in the initial viewing path to obtain a target viewing path.
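As a minimal sketch of the behaviour in claims 6 and 7 (sequential playback plus reordering when a drag of a video content control ends), the TypeScript below uses generic clip values; the function names, index-based drag event, and playback callback are assumptions for this example only.

```typescript
// Illustrative sketch of claims 6-7; names and the drag-event shape are assumptions.

// Play the clips of a viewing path in their current sequence.
function playInSequence<T>(path: T[], play: (clip: T) => void): void {
  for (const clip of path) {
    play(clip);  // a real UI would await playback completion before continuing
  }
}

// Called when the drag of the control at `fromIndex` ends over position `toIndex`;
// returns the adjusted sequence, i.e. the target viewing path.
function reorderOnDragEnd<T>(path: T[], fromIndex: number, toIndex: number): T[] {
  const next = [...path];
  const [moved] = next.splice(fromIndex, 1);  // remove the dragged clip
  next.splice(toIndex, 0, moved);             // insert it at the drop position
  return next;
}

// Usage example with clip identifiers.
const targetPath = reorderOnDragEnd(["living-room", "kitchen", "bedroom"], 2, 0);
console.log(targetPath);  // ["bedroom", "living-room", "kitchen"]
```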
8. The method of claim 1, further comprising:
in response to a touch operation acting on any position in the preview interface, acquiring an initial display position of the touch operation on the graphical user interface, and displaying a label editing window for the initial display position in the preview interface; and
in response to a first input operation acting on the label editing window, acquiring first label information corresponding to the first input operation, and generating an initial label identifier corresponding to the initial display position.
9. The method of claim 8, wherein the label editing window further comprises a label name control, a label content control, an add-picture control, and an add-link control, and the acquiring first label information corresponding to the first input operation in response to the first input operation acting on the label editing window and generating an initial label identifier corresponding to the initial display position comprises:
acquiring label name information in response to a first input operation for the label name control;
and/or acquiring label content information in response to a second input operation for the label content control;
and/or acquiring label picture information in response to a third input operation for the add-picture control;
and/or acquiring label link information in response to a fourth input operation for the add-link control;
and generating the initial label identifier corresponding to the initial display position by using at least one of the label name information, the label content information, the label picture information, and the label link information.
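For illustration of claim 9 only, the TypeScript sketch below assembles an initial label identifier from whichever of the four pieces of label information the user supplied; the field names, the position shape, and the validation rule are assumptions introduced here, not part of the claim.

```typescript
// Illustrative sketch of claim 9; field names are assumptions.
interface FirstLabelInfo {
  name?: string;        // from the label name control
  content?: string;     // from the label content control
  pictureUrl?: string;  // from the add-picture control
  linkUrl?: string;     // from the add-link control
}

interface LabelIdentifier {
  position: { x: number; y: number };  // initial display position on the GUI
  info: FirstLabelInfo;
}

function createInitialLabel(
  position: { x: number; y: number },
  info: FirstLabelInfo,
): LabelIdentifier {
  // Claim 9 requires at least one of the four pieces of label information.
  if (!info.name && !info.content && !info.pictureUrl && !info.linkUrl) {
    throw new Error("first label information is empty");
  }
  return { position, info };
}

// Usage example.
const label = createInitialLabel({ x: 320, y: 540 }, { name: "south-facing balcony" });
console.log(label.info.name);  // "south-facing balcony"
```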
10. The method of claim 8, further comprising:
in response to a touch operation acting on the initial label identifier, displaying a label editing window corresponding to the initial label identifier; and
in response to a second input operation for the label editing window, acquiring second label information corresponding to the second input operation, and generating a target label identifier corresponding to the initial display position according to the second label information.
11. The method of claim 10, wherein the label editing window further comprises a label deletion control, the method further comprising:
in response to a touch operation acting on the initial label identifier, displaying the label editing window corresponding to the initial label identifier; and
in response to a touch operation acting on the label deletion control, deleting the initial label identifier corresponding to the initial display position in the preview interface.
12. The method of claim 8, wherein the acquiring an initial display position of the touch operation on the graphical user interface in response to the touch operation acting on any position on the graphical user interface, and displaying a label editing window for the initial display position in the preview interface comprises:
in response to a touch operation acting on any position on the graphical user interface, pausing playback of a current target house source video clip corresponding to a current video content control, acquiring the initial display position of the touch operation on the graphical user interface as well as a target video frame and a target time point corresponding to the initial display position in the current house source video clip, and displaying the label editing window for the initial display position in the preview interface.
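The following TypeScript sketch illustrates the step in claim 12: on a touch anywhere in the interface, pause the current clip and record where the label is placed, together with the target video frame and target time point. The player interface and all names are assumptions for this example.

```typescript
// Illustrative sketch of claim 12; the player API below is an assumption.
interface ClipPlayer {
  pause(): void;
  currentTimeSec(): number;     // playback position within the current clip
  captureFrame(): Uint8Array;   // encoded snapshot of the current video frame
}

interface LabelAnchor {
  displayPosition: { x: number; y: number };  // initial display position of the touch
  targetTimeSec: number;                      // target time point in the clip
  targetFrame: Uint8Array;                    // target video frame
}

function onTouchForLabel(player: ClipPlayer, x: number, y: number): LabelAnchor {
  player.pause();  // pause the current target house source video clip
  return {
    displayPosition: { x, y },
    targetTimeSec: player.currentTimeSec(),
    targetFrame: player.captureFrame(),
  };
}
```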
13. The method of claim 12, wherein the label editing window comprises a label publishing control, the method further comprising:
in response to a touch operation acting on the label publishing control, generating, according to the first label information, an initial label identifier corresponding to the initial display position at the target video frame and the target time point acquired in the preview interface.
14. The method of claim 13, further comprising:
in response to an end of a dragging operation acting on the initial label identifier, determining a target display position of the dragging operation on the graphical user interface, and controlling the initial label identifier to move from the initial display position to the target display position.
15. The method of claim 14, further comprising:
in response to a dragging operation acting on the initial label identifier, acquiring a target display position of the initial label identifier in the graphical user interface, and displaying a deletion area in the graphical user interface if the target display position satisfies a preset condition; and
deleting the initial label identifier in the preview interface in response to the initial label identifier being dragged to the deletion area.
16. The method of claim 15, wherein the acquiring a target display position of the initial label identifier in the graphical user interface in response to the dragging operation acting on the initial label identifier, and displaying a deletion area in the graphical user interface if the target display position satisfies a preset condition comprises:
in response to the dragging operation acting on the initial label identifier, acquiring the target display position of the initial label identifier in the graphical user interface, and displaying the deletion area in the graphical user interface if a distance between the target display position and the deletion area is smaller than a preset distance threshold.
17. The method of claim 15, wherein the acquiring a target display position of the initial label identifier in the graphical user interface in response to the dragging operation acting on the initial label identifier, and displaying a deletion area in the graphical user interface if the target display position satisfies a preset condition comprises:
in response to the dragging operation acting on the initial label identifier, acquiring a dragging operation duration of the initial label identifier in the graphical user interface corresponding to the dragging operation, and displaying the deletion area in the graphical user interface if the dragging operation duration is greater than a preset duration threshold.
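As a sketch of the alternatives in claims 15-17, the TypeScript below decides when to show the deletion area during a drag, either by proximity (claim 16) or by drag duration (claim 17). The deletion-area coordinates and both threshold values are made-up assumptions for this example.

```typescript
// Illustrative sketch of claims 15-17; all constants are assumed example values.
interface DragState {
  position: { x: number; y: number };  // current target display position of the label
  startedAtMs: number;                 // when the drag began
}

const DELETE_AREA = { x: 540, y: 1800 };  // assumed location of the deletion area
const DISTANCE_THRESHOLD_PX = 120;        // claim 16: preset distance threshold
const DURATION_THRESHOLD_MS = 800;        // claim 17: preset duration threshold

function shouldShowDeletionArea(drag: DragState, nowMs: number): boolean {
  const dx = drag.position.x - DELETE_AREA.x;
  const dy = drag.position.y - DELETE_AREA.y;
  const nearDeleteArea = Math.hypot(dx, dy) < DISTANCE_THRESHOLD_PX;          // claim 16
  const draggedLongEnough = nowMs - drag.startedAtMs > DURATION_THRESHOLD_MS; // claim 17
  return nearDeleteArea || draggedLongEnough;  // either preset condition suffices
}
```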
18. The method of claim 8, wherein the preview interface further comprises an editing control switch, the method further comprising:
when the editing control switch is in an on state, in response to a touch operation acting on any position in the preview interface, acquiring the initial display position of the touch operation on the graphical user interface, and displaying the label editing window for the initial display position in the preview interface; and
when the editing control switch is in an off state, not responding to the touch operation for generating the label editing window.
19. The method of claim 12, wherein the roaming route editing interface further comprises a preview control, the method further comprising:
in response to a touch operation acting on the preview control, displaying a preview interface, acquiring an initial viewing path corresponding to the roaming route, and playing a target house source video clip corresponding to the initial viewing path; and
when the playing progress reaches the target time point, displaying the initial label identifier corresponding to the target time point and an associated information interface corresponding to the label identifier.
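The short TypeScript sketch below illustrates the claim-19 behaviour of surfacing a label once playback reaches its target time point; the data shape, function name, and the sample values are assumptions for this example only.

```typescript
// Illustrative sketch of claim 19; names and sample values are assumptions.
interface TimedLabel {
  targetTimeSec: number;  // target time point recorded when the label was created
  labelName: string;
}

// Returns the labels whose target time point has been reached by the playing progress.
function labelsToShow(labels: TimedLabel[], progressSec: number): TimedLabel[] {
  return labels.filter((label) => progressSec >= label.targetTimeSec);
}

// Usage example: a label placed at 12 s appears once progress reaches 12 s.
const visible = labelsToShow([{ targetTimeSec: 12, labelName: "kitchen island" }], 12.5);
console.log(visible.map((label) => label.labelName));  // ["kitchen island"]
```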
20. The method of claim 1, wherein the house source video interface further comprises a new control, the method further comprising:
in response to a touch operation acting on the new control, displaying an information acquisition interface in the graphical user interface, and displaying a current functional space in the information acquisition interface; and
in response to an acquisition operation for the information acquisition interface, acquiring a house source video clip corresponding to the current functional space, and displaying the house source video clip on the house source video interface.
21. The method of claim 1, wherein the house source video interface further comprises a new control, the method further comprising:
in response to a touch operation acting on the new control, displaying a local multimedia interface of the electronic terminal in the graphical user interface, the local multimedia interface comprising at least a local video clip; and
selecting, from the local multimedia interface, the house source video clip corresponding to the functional space, and displaying the house source video clip on the house source video interface.
22. A device for generating a house source watching route, wherein content displayed through a graphical user interface of an electronic terminal comprises at least a house source interface corresponding to a target house source and a route design control, the target house source comprising at least one functional space, the device comprising:
a house source video display module, configured to display, in response to a touch operation acting on the route design control, a house source video interface corresponding to the target house source, the house source video interface comprising a flow control and at least one house source video clip corresponding to the functional space;
a house source video selection module, configured to select at least one target house source video clip in response to a selection operation for the house source video clip, and to acquire sequence information corresponding to the at least one target house source video clip;
a preview interface display module, configured to display a preview interface in the graphical user interface in response to a touch operation acting on the flow control, the preview interface comprising an initial viewing path corresponding to the sequence information; and
a roaming route generation module, configured to generate a house source watching route matching the target house source according to the initial viewing path;
wherein the initial viewing path comprises a video content control corresponding to the target house source video clip, the preview interface comprises a roaming route generation control, and the roaming route generation module comprises:
a roaming route editing interface display sub-module, configured to display a roaming route editing interface in response to a touch operation acting on the roaming route generation control, the roaming route editing interface comprising at least roaming point information corresponding to the video content control, a floor plan corresponding to the target house source, and a roaming route displayed in the floor plan and corresponding to the initial viewing path; and
a roaming route generation sub-module, configured to generate the house source watching route matching the target house source according to the roaming route and the roaming point information.
23. An electronic device, comprising a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with one another via the communication bus;
the memory is configured to store a computer program; and
the processor is configured to implement the method of any one of claims 1-21 when executing the computer program stored in the memory.
24. One or more computer-readable media having instructions stored thereon which, when executed by one or more processors, cause the processors to perform the method of any one of claims 1-21.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111111818.7A CN113963133B (en) | 2021-09-18 | 2021-09-18 | Method and device for generating house source watching route, electronic equipment and readable medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113963133A CN113963133A (en) | 2022-01-21 |
CN113963133B true CN113963133B (en) | 2022-07-08 |
Family
ID=79462189
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111111818.7A Active CN113963133B (en) | 2021-09-18 | 2021-09-18 | Method and device for generating house source watching route, electronic equipment and readable medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113963133B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115129187B (en) * | 2022-05-31 | 2024-02-06 | 瑞庭网络技术(上海)有限公司 | House source area viewing method and device, electronic equipment and storage medium |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107770036A (en) * | 2016-08-16 | 2018-03-06 | 北京嘀嘀无限科技发展有限公司 | Information of real estate retrieval, data receiver are with sending processing method, server and device |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108898516B (en) * | 2018-05-30 | 2020-06-16 | 贝壳找房(北京)科技有限公司 | Method, server and terminal for entering between functions in virtual three-dimensional room speaking mode |
CN111709856A (en) * | 2020-06-17 | 2020-09-25 | 北京字节跳动网络技术有限公司 | House source information processing method and device and electronic equipment |
CN112068750A (en) * | 2020-08-20 | 2020-12-11 | 北京五八信息技术有限公司 | House resource processing method and device |
CN112596694B (en) * | 2020-12-23 | 2022-02-11 | 北京城市网邻信息技术有限公司 | Method and device for processing house source information |
CN112651801B (en) * | 2020-12-23 | 2022-04-15 | 北京城市网邻信息技术有限公司 | Method and device for displaying house source information |
Also Published As
Publication number | Publication date |
---|---|
CN113963133A (en) | 2022-01-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11233972B2 (en) | Asynchronous online viewing party | |
US11017813B2 (en) | Storyline experience | |
TWI533686B (en) | Method for virtual channel management, network-based multimedia reproduction system with virtual channel, and computer readable storage medium | |
US8701008B2 (en) | Systems and methods for sharing multimedia editing projects | |
US20100153520A1 (en) | Methods, systems, and media for creating, producing, and distributing video templates and video clips | |
CA2943975C (en) | Method for associating media files with additional content | |
CN105794213A (en) | Collaborative video editing in cloud environment | |
US9369768B1 (en) | System and method for media presentation with dynamic secondary content | |
CN111343074B (en) | Video processing method, device and equipment and storage medium | |
US10521481B2 (en) | Video-production system with social-media features | |
US10186300B2 (en) | Method for intuitively reproducing video contents through data structuring and the apparatus thereof | |
KR20120116905A (en) | Method for presenting user-defined menu of digital content choices, organized as ring of icons surrounding preview pane | |
EP3326377A1 (en) | Video-production system with social-media features | |
US10009398B2 (en) | System for providing event-related contents to users attending an event and having respective user terminals | |
KR101924978B1 (en) | A providing system for timeline-based social network service | |
EP3322192A1 (en) | Method for intuitive video content reproduction through data structuring and user interface device therefor | |
US10404770B2 (en) | Video-production system with social-media features | |
US20160063087A1 (en) | Method and system for providing location scouting information | |
CN113963133B (en) | Method and device for generating house source watching route, electronic equipment and readable medium | |
KR101490506B1 (en) | Method and apparatus for editing moving picture contents | |
KR20210124906A (en) | System And Method For Obtaining Image Information, And Method For Displaying Image Acquisition Information | |
WO2008087742A1 (en) | Moving picture reproducing system, information terminal device and information display method | |
US20130262568A1 (en) | Content management system for publishing guides | |
JP2009260693A (en) | Metadata editing system, metadata editing program and metadata editing method | |
WO2016004478A1 (en) | Method and platform for handling audio content |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||