CN114371898B - Information display method, equipment, device and storage medium - Google Patents

Information display method, equipment, device and storage medium

Info

Publication number
CN114371898B
Authority
CN
China
Prior art keywords
vehicle
display state
live
view
action
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111682545.1A
Other languages
Chinese (zh)
Other versions
CN114371898A (en)
Inventor
Inventor not disclosed
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Chengshi Wanglin Information Technology Co Ltd
Original Assignee
Beijing Chengshi Wanglin Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Chengshi Wanglin Information Technology Co Ltd
Priority to CN202111682545.1A
Publication of CN114371898A
Application granted
Publication of CN114371898B

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/451 — Execution arrangements for user interfaces
    • G06F 3/04815 — Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • G06F 3/04845 — Interaction techniques based on graphical user interfaces [GUI] for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G06T 19/006 — Mixed reality
    • G06T 2200/04 — Indexing scheme for image data processing or generation involving 3D image data

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of the present application provide an information display method, equipment, device, and storage medium. In these embodiments, a live-action vehicle corresponding to a target vehicle is displayed on a first interactive interface, and a vehicle model corresponding to the same target vehicle is displayed on a second interactive interface. A linkage relation exists between the live-action vehicle and the vehicle model: when an interactive operation is performed on either one of them, the other responds in linkage according to that relation. This allows a user to control the live-action vehicle or the vehicle model conveniently and quickly, improving the user experience.

Description

Information display method, equipment, device and storage medium
Technical Field
The present application relates to the field of virtual reality technologies, and in particular, to an information display method, device, and apparatus, and a storage medium.
Background
In a VR/AR car-viewing scenario, a live-action vehicle can be shown online. By dragging or sliding, a user can browse the exterior or interior details of the live-action vehicle without visiting it in person. However, during such browsing the vehicle information cannot be displayed conveniently and quickly, so how to display vehicle information conveniently and quickly is a problem that urgently needs to be solved.
Disclosure of Invention
Various aspects of the present application provide an information display method, equipment, device, and storage medium that allow a user to control a live-action vehicle or a vehicle model quickly and conveniently, improving the user experience.
An embodiment of the present application provides an information display method. A first graphical user interface is provided through a first electronic terminal; the content displayed by the first graphical user interface includes a first interactive interface, on which a live-action vehicle corresponding to a target vehicle is displayed. The method includes: displaying a vehicle model corresponding to the live-action vehicle on a second interactive interface, where a linkage relation exists between the live-action vehicle and the vehicle model, and the vehicle model is obtained by modeling the target vehicle; and, in response to a first interactive operation acting on the vehicle model, switching the vehicle model from the current display state to a first display state and, according to the linkage relation, synchronously switching the live-action vehicle from the current display state to a second display state, where the second display state is adapted to the first display state.
An embodiment of the present application further provides an information display device. The information display device provides a first graphical user interface whose displayed content includes a first interactive interface, on which a live-action vehicle corresponding to a target vehicle is displayed. The information display device includes a display module, a first switching module, and a second switching module. The display module is configured to display a vehicle model corresponding to the live-action vehicle on a second interactive interface, where a linkage relation exists between the live-action vehicle and the vehicle model, and the vehicle model is obtained by modeling the target vehicle. The first switching module is configured to, in response to a first interactive operation acting on the vehicle model, switch the vehicle model from the current display state to a first display state. The second switching module is configured to synchronously switch the live-action vehicle from the current display state to a second display state according to the linkage relation, where the second display state is adapted to the first display state.
An embodiment of the present application further provides an electronic terminal, including a memory, a processor, and a display. The memory stores a computer program. The processor, coupled with the memory, executes the computer program to: provide a first graphical user interface through the display, whose displayed content includes a first interactive interface on which a live-action vehicle corresponding to a target vehicle is displayed; display a vehicle model corresponding to the live-action vehicle on a second interactive interface, where a linkage relation exists between the live-action vehicle and the vehicle model, and the vehicle model is obtained by modeling the target vehicle; and, in response to a first interactive operation acting on the vehicle model, switch the vehicle model from the current display state to a first display state and synchronously switch the live-action vehicle from the current display state to a second display state according to the linkage relation, where the second display state is adapted to the first display state.
Embodiments of the present application further provide a computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to implement the steps of the information display method provided by the embodiments of the present application.
In the embodiments of the present application, the live-action vehicle corresponding to the target vehicle is displayed on the first interactive interface and the vehicle model corresponding to the target vehicle is displayed on the second interactive interface, with a linkage relation between the two. When an interactive operation is performed on either the live-action vehicle or the vehicle model, the other responds in linkage according to the linkage relation, so the user can control the live-action vehicle or the vehicle model conveniently and quickly, improving the user experience.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a schematic flowchart of an information displaying method according to an exemplary embodiment of the present application;
FIG. 2a is a schematic diagram of a first interactive interface and a second interactive interface provided in an exemplary embodiment of the present application;
FIG. 2b is a schematic diagram of another first interactive interface and a second interactive interface provided by an exemplary embodiment of the present application;
FIG. 3a is a schematic diagram illustrating a target camera point location before being updated according to an embodiment of the present disclosure;
FIG. 3b is a diagram illustrating a target camera point location after being updated according to an embodiment of the present disclosure;
fig. 4a is a schematic diagram illustrating a shooting perspective of a target camera point before updating according to an embodiment of the present application;
fig. 4b is a schematic diagram of a target camera point after a shooting angle of view is updated according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of an information display device according to an exemplary embodiment of the present application;
fig. 6 is a schematic structural diagram of an electronic terminal according to an exemplary embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail and completely with reference to the following specific embodiments of the present application and the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The technical solutions provided by the embodiments of the present application are described in detail below with reference to the accompanying drawings.
The information display method in the embodiments of the present application can run on an electronic terminal or on a server. The electronic terminal may be a local electronic terminal. When the method runs on a server, the vehicle information can be displayed in the cloud.
In an optional embodiment, cloud display refers to an information display manner based on cloud computing. In the cloud display mode, the body that runs the information processing program is separated from the body that displays the vehicle information picture. Data storage and computation for the vehicle information display method are completed on a cloud display server, while a cloud display client receives and sends data and displays the picture. The cloud display client may be a display device with a data transmission function close to the user side, such as a mobile terminal, a television, a computer, or a handheld computer; the electronic terminal that processes the information data, however, is the cloud display server. When browsing information, the user operates the cloud display client to send an operation instruction to the cloud display server; the cloud display server renders the relevant picture according to the instruction, encodes and compresses the data, and returns it to the cloud display client through the network; finally, the cloud display client decodes the data and outputs the corresponding picture.
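The round trip described above (client sends an instruction, server renders and encodes a frame, client decodes and displays it) can be sketched as follows. This is a minimal illustration only: the function names, the instruction format, and the use of base64 as a stand-in for real video encoding are all assumptions, not details from the patent.

```python
import base64

# Hypothetical server-side handler: receives an operation instruction,
# "renders" a frame for it, and returns the frame in encoded form.
def cloud_server_handle(instruction: dict) -> str:
    # Stand-in for real rendering: produce frame bytes for the requested view.
    frame = f"frame[{instruction['action']}:{instruction['target']}]".encode()
    # Stand-in for video encoding/compression before network transfer.
    return base64.b64encode(frame).decode()

# Hypothetical client side: sends the instruction, decodes the reply,
# and returns the picture it would display.
def cloud_client_browse(action: str, target: str) -> str:
    instruction = {"action": action, "target": target}
    payload = cloud_server_handle(instruction)   # a network call in reality
    frame = base64.b64decode(payload).decode()   # client-side decoding
    return frame                                 # the client renders this frame

print(cloud_client_browse("rotate", "vehicle_model"))
```

In a real deployment the instruction would travel over a network and the frame would be a compressed video stream, but the division of labor — computation on the server, decoding and display on the client — is the same as in the paragraph above.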
In another alternative embodiment, the electronic terminal may be a local electronic terminal. The local electronic terminal stores an application program and is used for presenting an application interface. The local electronic terminal is used for interacting with a user through a graphical user interface, that is, an installation application is downloaded and run through the electronic device conventionally. The manner in which the local electronic terminal provides the graphical user interface to the user may include a variety of ways, for example, it may be rendered for display on a display screen of the terminal or provided to the user by holographic projection. For example, the local electronic terminal may include a display screen for presenting a graphical user interface including an application screen and a processor for running the application program, generating the graphical user interface, and controlling display of the graphical user interface on the display screen.
When the electronic terminal is a local electronic terminal, it may be, for example, a desktop computer, a notebook computer, a tablet computer, a mobile terminal, or a Virtual Reality (VR) device. Through the vehicle source picture displayed by the display device of a VR head-mounted device, the user can roam in a designated area as if actually roaming in a virtual space, while interacting with the virtual vehicle through the VR control device.
The terminal can run application programs, such as life application programs, audio application programs, game application programs and the like. The life application programs can be further divided according to different types, such as car renting and selling application programs, house renting and selling application programs, home service application programs, entertainment application programs and the like. The embodiment of the application is exemplified by running a car renting and selling application program on a mobile terminal, and it is understood that the application is not limited thereto.
While a user browses the exterior or interior of a live-action vehicle online, one method is to display the live-action vehicle and a top plan view of the vehicle at the same time, using the top plan view to obtain the current viewing angle of the live-action vehicle and to determine its spatial position relationship. However, the top plan view is two-dimensional while the live-action vehicle is three-dimensional, so mapping between the two requires a certain cognitive threshold and the user experience is poor. For this reason, in the embodiments of the present application, the live-action vehicle corresponding to the target vehicle is displayed on the first interactive interface and the vehicle model corresponding to the target vehicle is displayed on the second interactive interface, with a linkage relation between the two. When an interactive operation is performed on either the live-action vehicle or the vehicle model, the other responds in linkage according to the linkage relation, so the user can view the live-action vehicle and the vehicle model conveniently and quickly, improving the user experience.
Fig. 1 is a schematic flowchart of an information displaying method according to an exemplary embodiment of the present application. In the method, a first graphical user interface is provided through a first electronic terminal, content displayed by the first graphical user interface includes a first interactive interface, and a live-action vehicle corresponding to a target vehicle is displayed on the first interactive interface, as shown in fig. 1, the method includes:
101. displaying a vehicle model corresponding to the live-action vehicle on a second interactive interface, where a linkage relation exists between the live-action vehicle and the vehicle model, and the vehicle model is obtained by modeling the target vehicle;
102. in response to a first interactive operation acting on the vehicle model, switching the vehicle model from the current display state to a first display state and, according to the linkage relation, synchronously switching the live-action vehicle from the current display state to a second display state, where the second display state is adapted to the first display state.
In the present embodiment, the target vehicle is a vehicle that needs to be presented to the user; the live-action vehicle is obtained by carrying out image acquisition on a target vehicle and synthesizing the acquired images; the vehicle model is a vehicle model obtained by modeling a target vehicle, for example, a 3D vehicle model obtained by three-dimensionally modeling the target vehicle, or an animated vehicle model obtained by acquiring images of a real vehicle and performing animation synthesis on the acquired images.
In this embodiment, a linkage relation exists between the live-action vehicle and the vehicle model: when one of them moves or changes, the other moves or changes accordingly. An interactive operation may be initiated on the vehicle model on the second interactive interface; for ease of distinction and description, this operation is called the first interactive operation. The first interactive operation may include, but is not limited to: a local click operation, a zoom operation, a rotation operation, opening a door, opening a sunroof, opening a trunk, and the like. It may be initiated by a user or by service personnel, which is not limited here. In response to the first interactive operation acting on the vehicle model, the first electronic terminal switches the vehicle model from the current display state to a first display state and, according to the linkage relation, synchronously switches the live-action vehicle from the current display state to a second display state. The first display state results from performing the first interactive operation on the vehicle model in its current display state; the second display state results from performing the first interactive operation on the live-action vehicle in its current display state. The second display state corresponds to the first display state and the first interactive operation, or is adapted to the first display state. Examples follow.
For example, if the current display state of both the vehicle model and the live-action vehicle is the door-closed state, and the first interactive operation initiated on the vehicle model is opening the door, then the vehicle model is switched from the door-closed state to the door-open state, and accordingly the live-action vehicle is synchronously switched from the door-closed state to the door-open state according to the linkage relation. For another example, suppose the current display state of both the vehicle model and the live-action vehicle is the global display state, and the first interactive operation is a zoom-in operation on the vehicle model. The vehicle model is switched from the global display state to a local display state, for example a display state showing the sunroof of the vehicle model. According to the linkage relation, the zoom-in operation is also performed on the live-action vehicle, switching it from the current global display state to a magnified global display state that corresponds to, or is adapted to, the sunroof display state of the vehicle model.
In this embodiment, when an interactive operation is initiated on the vehicle model, the vehicle model may be switched from the current display state to a first display state corresponding to the operation, and according to the linkage relation the live-action vehicle may synchronously switch to a second display state corresponding to the operation. Likewise, when an interactive operation is initiated on the live-action vehicle, the live-action vehicle switches its display state and the vehicle model synchronously switches its display state according to the linkage relation. Specifically, in some optional embodiments of the present application, the interactive operation may be initiated on the live-action vehicle; for ease of distinction and description, this operation is called the second interactive operation. In response to the second interactive operation acting on the live-action vehicle, the first electronic terminal switches the live-action vehicle from the current display state to a third display state and, according to the linkage relation, synchronously switches the vehicle model from the current display state to a fourth display state, where the third and fourth display states both correspond to the second interactive operation and the third display state is adapted to the fourth display state. The second interactive operation is the same as or similar to the first interactive operation; for details, refer to the foregoing embodiments, which are not repeated here.
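The bidirectional linkage described above — an operation on either view switches its own display state and drags the other view along — can be sketched as a small state-sync class. This is an illustrative sketch only: the class, method names, and the door-open/door-closed state machine are assumptions chosen to mirror the door example, not an implementation from the patent.

```python
class LinkedVehicleViews:
    """Keeps a live-action vehicle view and a vehicle-model view in linkage."""

    def __init__(self):
        self.model_state = "door_closed"  # current display state of the vehicle model
        self.live_state = "door_closed"   # current display state of the live-action vehicle

    def _apply(self, state: str, operation: str) -> str:
        # Stand-in for real display-state switching: only the door transitions
        # from the text's example are modeled; unknown operations leave the
        # state unchanged.
        transitions = {
            ("door_closed", "open_door"): "door_open",
            ("door_open", "close_door"): "door_closed",
        }
        return transitions.get((state, operation), state)

    def interact_with_model(self, operation: str):
        # First interactive operation: acts on the model, live view follows.
        self.model_state = self._apply(self.model_state, operation)  # first display state
        self.live_state = self._apply(self.live_state, operation)    # second display state

    def interact_with_live(self, operation: str):
        # Second interactive operation: acts on the live view, model follows.
        self.live_state = self._apply(self.live_state, operation)    # third display state
        self.model_state = self._apply(self.model_state, operation)  # fourth display state

views = LinkedVehicleViews()
views.interact_with_model("open_door")
print(views.model_state, views.live_state)  # both views switch together
```

Either entry point mutates both states, which is the essential property of the linkage relation: the view that receives the operation and the view that follows it always end up in adapted display states.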
In the embodiment of the application, the live-action vehicle corresponding to the target vehicle is displayed on the first interactive interface, the vehicle model corresponding to the target vehicle is displayed on the second interactive interface, the live-action vehicle and the vehicle model have a linkage relation, the interactive operation is executed aiming at one of the live-action vehicle or the vehicle model, and the other one of the live-action vehicle or the vehicle model can carry out linkage response along with the interactive operation according to the linkage relation, so that a user can conveniently and quickly control the live-action vehicle or the vehicle model, and the experience of the user is improved.
In this embodiment, the first interactive interface and the second interactive interface may be displayed on the same electronic terminal, or may be displayed on different electronic terminals. The following description will be made separately.
In an alternative embodiment, the first interactive interface and the second interactive interface are displayed on the same electronic terminal. The second interactive interface and the first interactive interface are the same interactive interface on the first electronic terminal, and the vehicle model and the live-action vehicle are located in an edge region and a center region of that interface (such as the first interactive interface), respectively. The center region and edge region of the interactive interface are relative to each other, and their positions differ with the shape of the interface. For example, if the interactive interface is circular, the center region is a smaller concentric circle, and the edge region is the annular region outside that smaller circle; the center and edge regions may of course take other shapes, which is not limited here. For another example, if the interactive interface is rectangular, the center region is the central area of the rectangle and the edge regions are the areas at its four corners, for example the lower-left, upper-left, lower-right, and upper-right corner areas. Fig. 2a illustrates the vehicle model and the live-action vehicle with a rectangular interactive interface whose edge region is the upper-right corner area, but the application is not limited thereto.
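For the rectangular case above, deciding whether a point falls in the center region or one of the four corner edge regions is a simple bounds check. The sketch below is illustrative only: the function name and the assumption that each corner region spans 25% of each dimension are choices made here, not values from the patent.

```python
def classify_point(x, y, width, height, corner=0.25):
    """Return which region of a width x height rectangular interface (x, y) is in."""
    cw, ch = width * corner, height * corner   # corner region size (assumed 25%)
    left, right = x < cw, x > width - cw
    top, bottom = y < ch, y > height - ch
    if top and left:
        return "upper-left edge"
    if top and right:
        return "upper-right edge"
    if bottom and left:
        return "lower-left edge"
    if bottom and right:
        return "lower-right edge"
    return "center"

# The vehicle model in Fig. 2a sits in the upper-right corner area:
print(classify_point(950, 30, 1000, 600))   # point near the top-right corner
print(classify_point(500, 300, 1000, 600))  # point in the middle of the interface
```

A circular interface would use the same idea with a radius test (distance from the center against the inner circle's radius) instead of rectangular bounds.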
In another alternative embodiment, the first interactive interface and the second interactive interface are displayed on the same electronic terminal. The second interactive interface and the first interactive interface are different interactive interfaces on the first electronic terminal, and their positional relationship is not limited. For example, they may be side by side, arranged vertically or horizontally; they may be in a diagonal relationship, for example with the first interactive interface at the upper left or lower right of the second; or they may overlap, for example with the first interactive interface located on top of the second. Fig. 2a illustrates the second interactive interface at the upper-right corner of the first, but the application is not limited thereto. The interface sizes of the two are also not limited: the first interactive interface may be larger or smaller than the second. Figs. 2a and 2b illustrate the second interactive interface as smaller than the first.
In yet another alternative embodiment, the first interactive interface and the second interactive interface are displayed on different electronic terminals. The first interactive interface is an interactive interface on the first electronic terminal, the second interactive interface is an interactive interface on the second electronic terminal, and the second electronic terminal is in communication connection with the first electronic terminal. As shown in fig. 2b, the first electronic terminal is a display screen, and the second electronic terminal is a smart phone, but not limited thereto.
The first and second interactive interfaces may be displayed on one electronic terminal or on two different electronic terminals. In some optional embodiments of the present application, the vehicle model shows the contour information of the target vehicle and may omit its detail information; for example, the vehicle model may not show the paint of the target vehicle's surface or its interior, such details being shown by the live-action vehicle instead. On this basis, the interface size of the second interactive interface may be smaller than that of the first interactive interface. Displaying the vehicle model on the second interactive interface, with the smaller interface size, facilitates the user's interactive operations, while displaying the live-action vehicle on the first interactive interface, with the larger interface size, shows the detail information of the target vehicle to the user more clearly and intuitively. Further, when the two interfaces are displayed on two electronic terminals, the screen size of the second electronic terminal is smaller than that of the first electronic terminal; for example, the first electronic terminal may be implemented as a large LED screen and the second as a smartphone.
In this embodiment, the first interactive operation and the second interactive operation are not limited. The implementation manners of the first interactive operation and the second interactive operation are the same or similar, and the implementation manner of the first interactive operation is exemplified below, and for the implementation manner of the second interactive operation, reference may be made to the implementation manner of the first interactive operation, and details are not described herein again.
Example A1: the first interactive operation is implemented as a local clicking operation for viewing a local area of the vehicle. The vehicle local area may be a local area of the vehicle exterior or a local area of the vehicle interior. In the present embodiment, the manner in which the local clicking operation is initiated for the vehicle model is not limited. For example, a local region of the vehicle model may be clicked directly to initiate a local clicking operation on the vehicle model, so as to display that local region of the vehicle model. For another example, the second interactive interface includes a plurality of local controls, each local control pointing to a different local area on the vehicle model, and a local control is triggered to initiate a local clicking operation for the vehicle model, so that the corresponding local area of the vehicle model is displayed.
Regardless of how the local clicking operation is initiated, in this embodiment, after the local clicking operation is initiated for the vehicle model, a first partial enlarged view corresponding to the vehicle model and a second partial enlarged view corresponding to the live-action vehicle may be obtained according to the vehicle local area selected by the local clicking operation; wherein the first partial enlarged view is an enlarged view of the vehicle local area on the vehicle model and represents a first display state, and the second partial enlarged view is an enlarged view of the vehicle local area on the live-action vehicle and represents a second display state. The current view of the vehicle model is then switched to the first partial enlarged view, and the current view of the live-action vehicle is switched to the second partial enlarged view according to the linkage relation. The current view of the vehicle model represents the current display state of the vehicle model, and the current view of the live-action vehicle represents the current display state of the live-action vehicle.
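The linkage in Example A1 can be sketched as follows. This is a minimal illustration, not the patent's implementation: the class name `LinkedVehicleViewer`, the string-encoded view identifiers, and the method name are all hypothetical.

```python
class LinkedVehicleViewer:
    """Hypothetical sketch: a local clicking operation selects a vehicle
    local area, and both views switch in lockstep per the linkage relation."""

    def __init__(self):
        # Both views start from their global/default state.
        self.model_view = "model:global"   # current view of the vehicle model
        self.live_view = "live:global"     # current view of the live-action vehicle

    def on_local_click(self, local_area):
        # First partial enlarged view (first display state): the area on the model.
        first_enlarged = f"model:enlarged:{local_area}"
        # Second partial enlarged view (second display state): same area, live-action.
        second_enlarged = f"live:enlarged:{local_area}"
        self.model_view = first_enlarged   # switch the model's current view
        self.live_view = second_enlarged   # linkage: switch the live-action view too
        return first_enlarged, second_enlarged

viewer = LinkedVehicleViewer()
viewer.on_local_click("left_headlight")
```

A single user gesture thus produces two coordinated view switches, which is the core of the linkage relation described above.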
Example A2: the first interactive operation is a zoom operation. The zoom operation may be triggered by a two-finger sliding operation; in that case, a reference zoom ratio may be determined according to the zoom track of the zoom operation. Based on the reference zoom ratio, a first zoom operation is performed on the current view of the vehicle model, and a second zoom operation is synchronously performed on the current view of the live-action vehicle according to the linkage relation; the view obtained by the first zoom operation represents a first display state, and the view obtained by the second zoom operation represents a second display state. Alternatively, a zoom control is provided on the second interactive interface, and the zoom operation is implemented based on the zoom control. In this embodiment, a reference zoom ratio may be determined by triggering the zoom control, a first zoom operation may be performed on the current view of the vehicle model based on the reference zoom ratio, and a second zoom operation may be synchronously performed on the current view of the live-action vehicle according to the linkage relation.
In this embodiment, the actual zoom ratios of the first zoom operation and the second zoom operation may be the same or different. Optionally, considering that the interface sizes of the first interactive interface and the second interactive interface may differ, the actual zoom ratios of the first and second zoom operations may be determined according to the reference zoom ratio and the respective interface sizes. For example, for the first interactive interface with the larger interface size, the actual zoom ratio may be larger than the reference zoom ratio, while for the second interactive interface with the smaller interface size, the actual zoom ratio may be smaller than or equal to the reference zoom ratio.
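One way the per-interface zoom ratios could be derived is sketched below. The policy (larger interface amplifies the reference ratio in proportion to the size difference, smaller interface applies it as-is) is an illustrative assumption consistent with the text, and the function name is hypothetical.

```python
def actual_scalings(reference_scale, first_size, second_size):
    """Derive per-interface zoom ratios from one reference zoom ratio.

    Hypothetical policy: the first interface (assumed larger, showing the
    live-action vehicle) scales beyond the reference in proportion to the
    size difference; the second interface uses the reference as-is.
    """
    ratio = first_size / second_size          # > 1 when the first interface is larger
    first_scale = reference_scale * ratio if ratio > 1 else reference_scale
    second_scale = reference_scale
    return first_scale, second_scale
```

For example, with a reference ratio of 1.5, a 1920-pixel-wide first interface, and a 960-pixel-wide second interface, the first interface would zoom by 3.0 and the second by 1.5.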
Further optionally, in the case that the actual zoom ratios of the first zoom operation and the second zoom operation differ, the view obtained by the first zoom operation is a global view of the vehicle model, and the view obtained by the second zoom operation is a third partial enlarged view of the live-action vehicle. The view area corresponding to the vehicle local area shown by the third partial enlarged view may then be marked in the global view of the vehicle model, so that the position of the local area shown by the live-action vehicle can be read from the global view of the vehicle model. In this way, the user learns the current view angle of the live-action vehicle and the spatial position of the local area within the whole vehicle.
Example A3: the first interactive operation is a rotation operation. The manner of initiating the rotation operation is not limited; for example, the rotation operation on the vehicle model may be initiated by a single-finger drag operation, or the second interactive interface includes a rotation control, which is triggered to initiate the rotation operation on the vehicle model. After the rotation operation is initiated on the vehicle model, a first rotation operation may be performed on the current view of the vehicle model according to the rotation track of the rotation operation, and a second rotation operation may be synchronously performed on the current view of the live-action vehicle according to the linkage relation; the view obtained by the first rotation operation represents a first display state, and the view obtained by the second rotation operation represents a second display state. Optionally, relative rotation orientation information may be displayed in real time during the first rotation operation on the current view of the vehicle model; for example, the rotation angle or rotation direction may be displayed by numbers or icons.
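The rotation linkage of Example A3, including the real-time orientation readout, can be sketched as a single function. The angle accumulation from a drag track and the textual orientation format are illustrative assumptions, not details from the patent.

```python
def apply_rotation(model_angle, live_angle, drag_track):
    """Hypothetical sketch of Example A3: the drag track on the vehicle model
    yields a rotation delta applied to both current views in lockstep, and a
    human-readable orientation string is produced for on-screen display."""
    delta = sum(drag_track)                    # degrees accumulated from the drag
    model_angle = (model_angle + delta) % 360  # first rotation operation (model)
    live_angle = (live_angle + delta) % 360    # second rotation operation (linkage)
    direction = "clockwise" if delta >= 0 else "counter-clockwise"
    return model_angle, live_angle, f"{abs(delta)}° {direction}"
```

Because both views receive the same delta, the model and the live-action vehicle stay aligned at every step of the drag.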
Example A4: when the interior or the exterior of the target vehicle is displayed, the live-action vehicle or the vehicle model can be freely shown at any angle through the rotation operation described above. In addition, a view switching control associated with the vehicle model may be displayed on the first interactive interface. The view switching control is associated with a fixed view angle of the vehicle model; for example, the fixed view angle may be a roof view, a front view, a rear view, or an oblique side view. The implementation form of the view switching control may be, but is not limited to, a knob, an icon button, or a numeric button. Through the view switching control, the user quickly checks key angles of the exterior or interior of the target vehicle to quickly acquire key information.
Correspondingly, the first interactive operation is implemented as a view switching operation acting on the view switching control. When the view switching control is triggered, a first target view angle corresponding to the view switching operation can be acquired in response to the triggering operation on the view switching control, wherein the first target view angle is one of the fixed view angles of the vehicle model; the current view of the vehicle model is switched to the view of the vehicle model at the first target view angle, and the current view of the live-action vehicle is synchronously switched to the view of the live-action vehicle at the first target view angle according to the linkage relation. In addition, in this embodiment, explanation content may be associated with each fixed view angle of the vehicle model, and the explanation content may be delivered by Artificial Intelligence (AI), a broker's voice, or a real or simulated human image. The explanation content includes information of the vehicle model at the fixed view angle, for example, the information to be presented at that view angle or angle information. For example, the explanation content corresponding to the front view angle of the vehicle model includes information such as the vehicle lights, rain visors, or windshield, and the explanation content corresponding to the roof view angle includes the sunroof, roof rack, and the like. When the view switching control is triggered, the explanation content adapted to the first target view angle can be acquired and broadcast, and text information corresponding to the explanation content is displayed, with key information in the text emphasized, for example, by highlighting.
The key information may be matters that users focus on in the explanation content, for example, vehicle paint repair information or vehicle component update information, so that users can quickly view the related information at a fixed view angle of the vehicle model or the live-action vehicle.
Example A5: in this embodiment, in addition to acquiring the view corresponding to a fixed view angle of the vehicle model by triggering the view switching control, a correspondence between keywords and the fixed view angles of the vehicle model may be preset, and the fixed view angle corresponding to a target keyword is displayed when the target keyword is recognized. That is, different fixed view angles of the target vehicle may be displayed in cooperation with different keywords in the explanation content, so that the display automatically follows the explanation of the target vehicle without manual adjustment by the user.
Specifically, in the process of explaining the target vehicle, voice explanation content for the target vehicle is acquired in real time; semantic recognition is performed on the voice explanation content, and if a target keyword is recognized, a second target view angle corresponding to the target keyword is determined; the current view of the vehicle model is then switched to the view of the vehicle model at the second target view angle, and the current view of the live-action vehicle is synchronously switched to the view of the live-action vehicle at the second target view angle according to the linkage relation. In addition, in this embodiment, the explanation content adapted to the second target view angle may be acquired and used to explain the vehicle model at the second target view angle. Key information in the explanation content can also be highlighted.
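The keyword-to-view mapping of Example A5 can be sketched with a lookup table. The keywords, view names, and the reduction of semantic recognition to substring search are all illustrative assumptions; a real system would use a speech-recognition and NLU pipeline.

```python
# Hypothetical preset correspondence between keywords and fixed view angles.
KEYWORD_VIEWS = {
    "sunroof": "roof_view",
    "headlight": "front_view",
    "trunk": "rear_view",
}

def follow_narration(narration_text, current_view):
    """Return the second target view angle if a target keyword is recognized
    in the explanation content; otherwise keep the current view. Both the
    vehicle model and the live-action vehicle would then be switched to the
    returned view angle via the linkage relation."""
    text = narration_text.lower()
    for keyword, view in KEYWORD_VIEWS.items():
        if keyword in text:
            return view
    return current_view
```

For example, the narration "note the panoramic sunroof" would move both views to the roof view, while narration with no registered keyword leaves the views unchanged.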
In an optional embodiment, the first interactive interface may display a partial picture of a live-action image corresponding to the live-action vehicle, referred to below as a first view-angle picture. The live-action image may be a panoramic image (a 2D image) of the target vehicle mapped onto a sphere, or generated as a skybox, so as to improve the user's live-action viewing effect; the corresponding picture can be viewed at different roaming view angles, providing a panoramic roaming experience. The second interactive interface displays the vehicle model corresponding to the target vehicle through a model map. The model map of the target vehicle may be a picture corresponding to a space model generated after spatial modeling of the target vehicle; the space model corresponds to a virtual three-dimensional space, in which the user can continuously switch roaming view angles through operations such as clicking a roaming point and adjusting the roaming view angle, so as to obtain a panoramic roaming experience.
In an optional embodiment, a first camera point location in a first display style and at least one second camera point location in a second display style are shown in the model map, the first camera point location being the target camera point location. The shooting view angle corresponding to the target camera point location is matched with the first view-angle picture, and the first view-angle picture corresponds to a first roaming view angle of a first roaming point in the live-action image; that is, the shooting view angle corresponding to the target camera point location and the first roaming view angle are matched view angles.
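The camera-point structure described above can be modeled as a small data type. The field names, identifiers, and the two-point example are hypothetical; only the mapping (camera point to roaming point, target flag to display style) comes from the text.

```python
from dataclasses import dataclass

@dataclass
class CameraPoint:
    """Illustrative model-map camera point: each camera point maps to a
    roaming point in the live-action image; exactly one point is the target
    (first display style, dot with sector), the rest use the second style."""
    point_id: str
    roaming_point: str            # mapped roaming point in the live-action image
    shooting_angle: float = 0.0   # default shooting view angle, in degrees
    is_target: bool = False

    @property
    def display_style(self):
        return "first" if self.is_target else "second"

points = [
    CameraPoint("cam_right", "roam_1", is_target=True),  # target camera point
    CameraPoint("cam_top", "roam_2"),                    # second camera point
]
```

This structure is reused conceptually by the four schemes below: schemes one and three change which point has `is_target` set, while schemes two and four change `shooting_angle`.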
Next, the process of updating the display state in response to the first interactive operation is described for the case where the first interactive operation is a view switching operation. Switching the vehicle model from the current display state to the first display state in response to the first interactive operation acting on the vehicle model, and synchronously switching the live-action vehicle from the current display state to the second display state according to the linkage relation, comprises at least one of the following schemes:
Scheme one: in response to a view switching operation on the model map, the target camera point location is updated, and the first interactive interface is controlled, according to the linkage relation, to display a second view-angle picture matched with the updated target camera point location, wherein the second view-angle picture is a partial picture corresponding to a second roaming point in the live-action image, the model map after the target camera point location is updated corresponds to the first display state of the vehicle model, and the second view-angle picture corresponds to the second display state of the live-action vehicle;
Scheme two: in response to a view switching operation on the target camera point location, the shooting view angle corresponding to the target camera point location is adjusted, and the first interactive interface is controlled, according to the linkage relation, to display a third view-angle picture matched with the adjusted shooting view angle, wherein the third view-angle picture is a partial picture of a second roaming view angle corresponding to the first roaming point in the panoramic image, the model map after the shooting view angle of the target camera point location is adjusted corresponds to the first display state of the vehicle model, and the third view-angle picture corresponds to the second display state of the live-action vehicle.
Scheme one is first introduced below:
In response to the view switching operation on the model map, the target camera point location is updated from the first camera point location to one of the at least one second camera point location; that is, in response to the view switching operation, one camera point location is selected from the at least one second camera point location and determined as the target camera point location. The first camera point location is then no longer the target camera point location. When the target camera point location is updated, the first interactive interface is controlled, according to the linkage relation, to display the second view-angle picture matched with the updated target camera point location, so that, in response to the view switching operation on the model map, the target camera point location is updated and the first interactive interface is driven to respond in linkage. The second view-angle picture is a partial picture corresponding to the second roaming point in the live-action image, specifically the view-angle picture corresponding to the default roaming view angle of the second roaming point, and that default roaming view angle is matched with the default shooting view angle of the updated target camera point location. The second roaming point is a roaming point in the live-action image different from the first roaming point; the roaming points in the live-action image and the camera point locations in the model map may form a mapping relation.
The model map before the target camera point location is updated corresponds to the current display state of the vehicle model, and the model map after the target camera point location is updated corresponds to the first display state of the vehicle model; the first view-angle picture corresponds to the current display state of the live-action vehicle, and the second view-angle picture corresponds to the second display state of the live-action vehicle.
Updating the target camera point location in response to the view switching operation on the model map comprises: in response to a first selection operation among the at least one second camera point location, updating the target camera point location from the first camera point location to the selected second camera point location; updating the first camera point location to the second display style, and updating the selected second camera point location to the first display style.
When the target camera point location is updated in response to the view switching operation, specifically, a first selection operation among the at least one second camera point location is received, and in response to the first selection operation, the selected second camera point location is determined as the updated target camera point location; the view switching operation here is the first selection operation. Since the selected second camera point location becomes the updated target camera point location, the first camera point location is no longer the target camera point location. After the target camera point location is updated, the first camera point location changes to the second display style, and the selected second camera point location changes to the first display style.
For example, referring to fig. 3a, the first camera point location (the camera point location on the right side) in the model map is in the first display style, that is, a dot mark with a sector area is presented at the first camera point location, and the plurality of second camera point locations are in the second display style, that is, only a dot mark is presented at each second camera point location. When a first selection operation among the plurality of second camera point locations is received, the selected second camera point location is determined as the target camera point location, the selected second camera point location is controlled to switch to the first display style, and the first camera point location changes to the second display style; as shown in fig. 3b, the target camera point location is updated to the upper second camera point location.
In the above implementation process, updating the target camera point location drives the first interactive interface to respond in linkage, switching from the first view-angle picture of the first roaming view angle corresponding to the first roaming point in the live-action image to the second view-angle picture of the second roaming point in the live-action image, so that roaming-point switching is controlled by updating the target camera point location.
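Scheme one can be condensed into one function: the first selection operation swaps the display styles and the linkage jumps the live-action view to the roaming point mapped to the new target. The dictionary layout and function name are illustrative assumptions.

```python
def select_camera_point(points, selected_id):
    """Hypothetical sketch of scheme one. `points` maps camera point id to
    (mapped roaming point, is_target). The selected second camera point
    becomes the target (first display style), the old target is demoted
    (second display style), and the roaming point mapped to the new target
    is returned as the linkage response for the first interactive interface."""
    new_points = {}
    for pid, (roam, _is_target) in points.items():
        new_points[pid] = (roam, pid == selected_id)   # display-style swap
    new_roaming_point = points[selected_id][0]          # linkage response
    return new_points, new_roaming_point
```

A single selection thus atomically updates the model map's styles and the live-action roaming point, keeping the two interfaces consistent.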
Scheme two is introduced below:
In response to the view switching operation on the target camera point location, the shooting view angle corresponding to the target camera point location is adjusted. In this scheme, the target camera point location is not updated; only its shooting view angle changes. Adjusting the shooting view angle corresponding to the target camera point location in response to the view switching operation on the target camera point location comprises: in response to a rotation operation on the target camera point location, adjusting the shooting view angle corresponding to the target camera point location.
When a rotation operation on the target camera point location is received from the user, the shooting view angle corresponding to the target camera point location is switched from a first shooting view angle to a second shooting view angle in response to the rotation operation; the view switching operation here is the rotation operation on the target camera point location. For example, referring to fig. 4a, the first camera point location (the camera point location on the left side, which is the target camera point location) in the model map is in the first display style, that is, a dot mark with a sector area facing the target vehicle is presented at the first camera point location, and the plurality of second camera point locations are in the second display style, that is, only a dot mark is presented at each second camera point location. When a rotation operation on the first camera point location is received, the sector area is controlled to rotate, as shown in fig. 4b, so as to adjust the shooting view angle corresponding to the first camera point location.
When the shooting view angle corresponding to the target camera point location is adjusted, the first interactive interface is controlled, according to the linkage relation, to display the third view-angle picture matched with the adjusted shooting view angle, so that, in response to the view switching operation on the model map, the shooting view angle of the target camera point location is updated and the first interactive interface is driven to respond in linkage. The third view-angle picture is a partial picture of the second roaming view angle corresponding to the first roaming point in the live-action image, and the second roaming view angle and the first roaming view angle are different roaming view angles corresponding to the first roaming point.
The model map before the shooting view angle of the target camera point location is updated corresponds to the current display state of the vehicle model, and the model map after the shooting view angle is updated corresponds to the first display state of the vehicle model; the first view-angle picture corresponds to the current display state of the live-action vehicle, and the third view-angle picture corresponds to the second display state of the live-action vehicle.
In the above implementation process, updating the shooting view angle of the target camera point location drives the first interactive interface to respond in linkage, switching from the first view-angle picture of the first roaming view angle corresponding to the first roaming point in the live-action image to the third view-angle picture of the second roaming view angle corresponding to the same first roaming point, so that switching of roaming view angles at the same roaming point is controlled by updating the shooting view angle of the target camera point location.
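Scheme two reduces to rotating the target camera point's sector and propagating the new angle to the roaming view. The angle arithmetic and the tuple encoding of "same roaming point, new roaming view angle" are illustrative assumptions.

```python
def rotate_target_camera(shooting_angle, rotation_delta, roaming_point):
    """Hypothetical sketch of scheme two: rotating the target camera point
    changes its shooting view angle; the roaming point stays fixed, and the
    first interface switches to the roaming view matched to the new angle
    (the third view-angle picture)."""
    new_angle = (shooting_angle + rotation_delta) % 360
    # Linkage response: same roaming point, roaming view angle set to match
    # the adjusted shooting view angle.
    return new_angle, (roaming_point, new_angle)
```

Unlike scheme one, no display-style swap occurs here, since the target camera point location itself does not change.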
Next, the process of updating the display state is described for the case where the second interactive operation is a view switching operation. At least one third roaming point is displayed in the first view-angle picture. Switching the live-action vehicle from the current display state to a third display state in response to the second interactive operation acting on the live-action vehicle, and synchronously switching the vehicle model from the current display state to a fourth display state according to the linkage relation, comprises at least one of the following schemes:
Scheme three: in response to a view switching operation on the live-action image, the first interactive interface is controlled to display a fourth view-angle picture corresponding to a third target roaming point in the live-action image, and the target camera point location is updated according to the linkage relation, wherein the updated target camera point location is matched with the fourth view-angle picture, the fourth view-angle picture corresponds to the third display state of the live-action vehicle, and the model map after the target camera point location is updated corresponds to the fourth display state of the vehicle model;
Scheme four: in response to a view switching operation on the first roaming point, the first interactive interface is controlled to display a fifth view-angle picture of a third roaming view angle corresponding to the first roaming point in the live-action image, and the shooting view angle corresponding to the target camera point location is adjusted according to the linkage relation, wherein the adjusted shooting view angle is matched with the fifth view-angle picture, the fifth view-angle picture corresponds to the third display state of the live-action vehicle, and the model map after the shooting view angle of the target camera point location is adjusted corresponds to the fourth display state of the vehicle model.
Scheme three is first introduced below:
In response to the view switching operation on the live-action image, the first interactive interface is controlled to display the fourth view-angle picture corresponding to the third target roaming point in the live-action image, where the fourth view-angle picture is the view-angle picture corresponding to the default roaming view angle of the third target roaming point. The third target roaming point in this embodiment and the second roaming point in the previous embodiment (the embodiment in which the interactive operation acts on the model map) may be the same roaming point or different roaming points.
Controlling the first interactive interface, in response to the view switching operation on the live-action image, to display the fourth view-angle picture corresponding to the third target roaming point comprises: in response to a second selection operation among the at least one third roaming point, controlling the first interactive interface to display the fourth view-angle picture corresponding to the selected third target roaming point.
When the first interactive interface is controlled to display the fourth view-angle picture in response to the view switching operation, a second selection operation among the at least one third roaming point displayed in the first view-angle picture may be received; in response to the second selection operation, a third roaming point is selected and determined as the third target roaming point, and the fourth view-angle picture corresponding to the selected third target roaming point is displayed on the first interactive interface, thereby updating the display of the first interactive interface. The first view-angle picture can display a plurality of roaming points, and each roaming point can correspond to a view-angle picture different from the first view-angle picture, so that the first view-angle picture can be switched to other view-angle pictures; correspondingly, those other view-angle pictures also display roaming points, so that view-angle pictures can continue to be switched.
When the first interactive interface is controlled to display the fourth view-angle picture, the target camera point location is updated according to the linkage relation, so that the updated target camera point location is matched with the fourth view-angle picture; thus, in response to the view switching operation on the live-action image, the first interactive interface is controlled to update its displayed picture, and the target camera point location is driven to update based on the linkage response.
The fourth view-angle picture is a partial picture corresponding to the third target roaming point in the live-action image, specifically the view-angle picture corresponding to the default roaming view angle of the third target roaming point, and that default roaming view angle is matched with the default shooting view angle of the updated target camera point location.
The model map before the target camera point location is updated corresponds to the current display state of the vehicle model, and the model map after the target camera point location is updated corresponds to the fourth display state of the vehicle model; the first view-angle picture corresponds to the current display state of the live-action vehicle, and the fourth view-angle picture corresponds to the third display state of the live-action vehicle. The updated target camera point location is determined among the at least one second camera point location; the first camera point location is no longer the target camera point location, and after the target camera point location is updated, the first camera point location changes to the second display style and the determined second camera point location changes to the first display style.
In the above implementation process, the first interactive interface is updated in response to the view switching operation, and the matched target camera point location is determined among the at least one second camera point location based on the linkage response, so that the target camera point location is updated through switching of the roaming point.
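Scheme three is the inverse of scheme one: the selection happens in the live-action picture, and the linkage resolves which camera point should become the target. The reverse mapping and return shape below are illustrative assumptions.

```python
def select_roaming_point(camera_for_roam, selected_roam, current_target):
    """Hypothetical sketch of scheme three: the second selection operation
    picks a third target roaming point in the live-action image; the linkage
    looks up the camera point mapped to it, makes that point the new target,
    and records the display-style changes for the model map."""
    new_target = camera_for_roam[selected_roam]    # inverse roaming->camera mapping
    style_changes = {current_target: "second",     # old target demoted
                     new_target: "first"}          # new target promoted
    return new_target, style_changes
```

If the selected roaming point already maps to the current target, the "swap" is a no-op in effect, which matches the intuition that re-selecting the current viewpoint changes nothing.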
Scheme four is introduced below:
In response to the view switching operation on the first roaming point, the first interactive interface is controlled to display the fifth view-angle picture of the third roaming view angle corresponding to the first roaming point in the live-action image. In this scheme, the roaming point is unchanged; only the roaming view angle corresponding to the roaming point is updated.
Controlling the first interactive interface, in response to the view switching operation on the first roaming point, to display the fifth view-angle picture of the third roaming view angle corresponding to the first roaming point comprises: in response to an operation of adjusting the roaming view angle at the first roaming point, controlling the first interactive interface to display the fifth view-angle picture corresponding to the third roaming view angle.
When a view switching operation performed on the first roaming point is received, in response to that operation, the current roaming view angle is switched from the first roaming view angle corresponding to the first roaming point to the third roaming view angle corresponding to the first roaming point, so that the first interactive interface is controlled to display the fifth view-angle picture corresponding to the third roaming view angle. The fifth view-angle picture is a partial picture of the third roaming view angle corresponding to the first roaming point in the panoramic image; the third roaming view angle and the first roaming view angle are different view angles corresponding to the first roaming point, and the third roaming view angle and the second roaming view angle in the above embodiment (the embodiment in which the first interactive operation acts on the model map) may be the same view angle or different view angles.
Under the condition that the first interactive interface is controlled to display the fifth visual angle picture, the shooting visual angle corresponding to the target camera point location is adjusted according to the linkage relation, so that the adjusted shooting visual angle is matched with the fifth visual angle picture. In this way, the first interactive interface display is updated in response to the visual angle switching operation on the first roaming point, and the shooting visual angle of the target camera point location is driven to update based on the linkage response.
The model map before the shooting visual angle of the target camera point location is adjusted corresponds to the current display state of the vehicle model, and the model map after the shooting visual angle is adjusted corresponds to the fourth display state of the vehicle model; the first visual angle picture corresponds to the current display state of the live-action vehicle, and the fifth visual angle picture corresponds to the third display state of the live-action vehicle.
In the implementation process, the first interactive interface is updated by responding to the visual angle switching operation, and the shooting visual angle of the target camera point is adjusted based on the linkage response, so that the shooting visual angle of the target camera point is updated by switching the roaming visual angles.
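As a rough illustration of this linkage, the view-angle switch on the roaming point and the driven adjustment of the camera point's shooting angle can be sketched as follows. This is a minimal sketch under stated assumptions: the function names, the yaw-angle representation, and the calibration offset are all hypothetical, not part of the disclosed implementation.

```python
# Hypothetical sketch of scheme four: the roaming point stays fixed while its
# roaming visual angle changes, and the linkage adjusts the shooting visual
# angle of the target camera point location in the model map to match.

def switch_roaming_view_angle(current_yaw_deg: float, delta_deg: float) -> float:
    """Return the new roaming visual angle after a view-switching operation."""
    return (current_yaw_deg + delta_deg) % 360.0

def linked_shooting_angle(roaming_yaw_deg: float, offset_deg: float = 0.0) -> float:
    """Shooting angle of the target camera point that matches the roaming view.

    `offset_deg` is an assumed calibration between the live-action scene and
    the model-map coordinate frames.
    """
    return (roaming_yaw_deg + offset_deg) % 360.0

# A view switch on the first roaming point yields the third roaming visual
# angle; the target camera point's shooting angle follows via the linkage.
new_yaw = switch_roaming_view_angle(30.0, 90.0)
camera_yaw = linked_shooting_angle(new_yaw)
print(new_yaw, camera_yaw)  # 120.0 120.0
```
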
It should be noted that the execution subjects of the steps of the method provided in the foregoing embodiments may be the same device, or different devices may serve as the execution subjects of the method. For example, the execution subject of steps 101 to 103 may be device A; for another example, the execution subject of steps 101 and 102 may be device A, and the execution subject of step 103 may be device B; and so on.
In addition, some of the flows described in the above embodiments and drawings include a plurality of operations occurring in a specific order, but it should be clearly understood that these operations may be executed out of the order in which they appear herein or in parallel; the sequence numbers of the operations, such as 101 and 102, are used merely to distinguish the operations and do not by themselves represent any execution order. Additionally, the flows may include more or fewer operations, and these operations may be executed sequentially or in parallel. It should also be noted that the descriptions of "first", "second", etc. herein are used to distinguish different messages, devices, modules, etc., and represent neither a sequential order nor a limitation that the "first" and "second" items be of different types.
Fig. 5 is a schematic structural diagram of an information display apparatus according to an exemplary embodiment of the present application, where the information display apparatus provides a first graphical user interface, content displayed by the first graphical user interface includes a first interactive interface, and a live-action vehicle corresponding to a target vehicle is displayed on the first interactive interface, as shown in fig. 5, the information display apparatus includes: a display module 51, a first switching module 52 and a second switching module 53.
The display module 51 is configured to display, on the second interactive interface, a vehicle model corresponding to the live-action vehicle, where the live-action vehicle and the vehicle model have a linkage relationship, and the vehicle model is obtained by modeling a target vehicle;
a first switching module 52 for switching the vehicle model from the current display state to the first display state in response to a first interactive operation acting on the vehicle model;
and the second switching module 53 is configured to synchronously switch the live-action vehicle from the current display state to a second display state according to the linkage relationship, where the second display state is adapted to the first display state.
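The cooperation of these modules — one side switches its display state, and the other side is synchronously switched to an adapted state via the linkage relation — can be pictured as two views bound by shared state. The snippet below is an illustrative sketch only; the class, method names, and state encoding are assumptions, not the apparatus disclosed in this application.

```python
class LinkedVehicleViews:
    """Illustrative linkage between a vehicle-model view and a live-action view.

    Interacting with either side switches its display state and, via the
    linkage relation, synchronously switches the other side to an adapted
    display state.
    """

    def __init__(self):
        self.model_state = "current"        # display state of the vehicle model
        self.live_action_state = "current"  # display state of the live-action vehicle

    def interact_with_model(self, first_display_state: str) -> None:
        # First interactive operation: switch the model, then sync the live-action view.
        self.model_state = first_display_state
        self.live_action_state = self._adapt(first_display_state)

    def interact_with_live_action(self, third_display_state: str) -> None:
        # Second interactive operation: switch the live-action view, then sync the model.
        self.live_action_state = third_display_state
        self.model_state = self._adapt(third_display_state)

    @staticmethod
    def _adapt(state: str) -> str:
        # The adapted state mirrors the driving state; a real system would map,
        # e.g., a model rotation to the matching live-action camera pose.
        return state + "-adapted"


views = LinkedVehicleViews()
views.interact_with_model("zoomed-front-wheel")
print(views.live_action_state)  # "zoomed-front-wheel-adapted"
```
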
In an alternative embodiment, the first switching module 52 is further configured to: switching the live-action vehicle from the current display state to a third display state in response to a second interactive operation acting on the live-action vehicle; the second switching module 53 is further configured to: and synchronously switching the vehicle model from the current display state to a fourth display state according to the linkage relation, wherein the fourth display state is adaptive to the third display state.
In an optional embodiment, the second interactive interface and the first interactive interface are the same interactive interface on the first electronic terminal, and the vehicle model and the live-action vehicle are respectively located in an edge area and a central area of the same interactive interface; or the second interactive interface and the first interactive interface are different interactive interfaces on the first electronic terminal; or the second interactive interface is an interactive interface on a second electronic terminal, and the second electronic terminal is in communication connection with the first electronic terminal.
In an alternative embodiment, the interface size of the second interactive interface is smaller than the interface size of the first interactive interface.
In an optional embodiment, the first interactive operation is a local clicking operation for viewing a local area of the vehicle, and the first switching module 52 is specifically configured to: according to a vehicle local area selected by local clicking operation, acquiring a first local enlarged view corresponding to a vehicle model and a second local enlarged view corresponding to a live-action vehicle; switching a current view of the vehicle model to a first partially enlarged view; the second switching module 53 is specifically configured to: switching the current view of the live-action vehicle into a second local enlarged view according to the linkage relation; wherein the first partial enlarged view is an enlarged view of a partial region of the vehicle on the vehicle model and represents a first display state; the second partial enlarged view is an enlarged view of a partial region of the vehicle on the live-action vehicle and represents a second display state.
In an alternative embodiment, the local region of the vehicle selected by the local clicking operation is a local region of the vehicle exterior or a local region of the vehicle interior.
In an optional embodiment, the first interaction operation is a zoom operation, and the first switching module 52 is specifically configured to: determining a reference scaling according to a scaling track of the scaling operation; performing a first zooming operation on a current view of the vehicle model based on the reference zooming scale; the second switching module 53 is specifically configured to: according to the linkage relation, synchronously carrying out second zooming operation on the current view of the live-action vehicle; and the view obtained by the first zooming operation represents a first display state, and the view obtained by the second zooming operation represents a second display state.
In an alternative embodiment, the first zoom operation is different from the second zoom operation in an actual zoom ratio, which is determined based on the reference zoom ratio and the interface size.
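One plausible reading of "the actual zoom ratio is determined based on the reference zoom ratio and the interface size" is that the reference ratio derived from the gesture track is rescaled per interface, so the perceived zoom stays consistent when the second interactive interface is smaller than the first. The sketch below illustrates that reading; the formula, the base width, and the function name are assumptions for illustration.

```python
def actual_zoom_ratio(reference_ratio: float,
                      interface_width: int,
                      base_width: int = 1080) -> float:
    """Hypothetical mapping from the reference zoom ratio (derived from the
    zoom track of the zoom operation) to the actual ratio applied on one
    interface, scaled by that interface's width."""
    return reference_ratio * interface_width / base_width

reference = 1.5  # e.g. a pinch gesture enlarged the zoom track by 1.5x
model_zoom = actual_zoom_ratio(reference, interface_width=360)   # small model view
live_zoom = actual_zoom_ratio(reference, interface_width=1080)   # full-screen live view
print(model_zoom, live_zoom)  # 0.5 1.5
```

Under this assumption the two zoom operations share one reference ratio but apply different actual ratios, matching the statement that the first and second zoom operations differ in actual zoom ratio.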
In an optional embodiment, the view obtained by the first zooming operation is a global view of the vehicle model, and the view obtained by the second zooming operation is a third local enlarged view of the live-action vehicle; the information presentation device further includes: a marking module; and the marking module is used for marking a view area corresponding to the vehicle local area displayed by the third local enlarged view in the global view of the vehicle model.
In an alternative embodiment, the first interaction operation is a rotation operation, and the first switching module 52 is specifically configured to: performing first rotation operation on the current view of the vehicle model according to the rotation track of the rotation operation; the second switching module 53 is specifically configured to: according to the linkage relation, synchronously performing second rotation operation on the current view of the live-action vehicle; wherein the view obtained by the first rotation operation represents a first display state, and the view obtained by the second rotation operation represents a second display state.
In an optional embodiment, the information presentation apparatus further comprises: a display module; and the display module is used for displaying the relative rotation azimuth information in real time in the process of carrying out first rotation operation on the current view of the vehicle model.
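The rotation handling above — deriving a rotation from the gesture track and showing relative rotation azimuth information in real time — can be sketched roughly as follows. The drag-to-angle sensitivity, the azimuth names, and both function names are illustrative assumptions, not the disclosed implementation.

```python
def rotation_from_track(start_xy, end_xy):
    """Derive a rotation angle (degrees) from the drag track of a rotation
    operation; a simple horizontal-drag model, assumed for illustration."""
    pixels_per_degree = 4.0  # assumed gesture sensitivity
    return (end_xy[0] - start_xy[0]) / pixels_per_degree

def relative_azimuth(model_yaw_deg: float) -> str:
    """Azimuth text displayed in real time while the model view rotates."""
    yaw = model_yaw_deg % 360.0
    names = ["front", "right side", "rear", "left side"]
    return names[int((yaw + 45.0) % 360.0 // 90.0)]

# A 360-pixel horizontal drag yields a 90-degree first rotation operation;
# the live-action view would perform the second rotation in linkage.
angle = rotation_from_track((100, 300), (460, 300))
print(angle, relative_azimuth(angle))  # 90.0 right side
```
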
In an optional embodiment, the display module is further configured to display a perspective switching control associated with the vehicle model; the first switching module 52 is specifically configured to: acquiring a first target visual angle corresponding to visual angle switching operation, and switching the current view of the vehicle model into the view of the vehicle model under the first target visual angle; the second switching module 53 is specifically configured to: synchronously switching the current view of the live-action vehicle into the view of the live-action vehicle under the first target view angle according to the linkage relation; the information presentation device further includes: the broadcasting system comprises an acquisition module and a broadcasting module; the acquisition module is used for acquiring the explanation content adaptive to the first target visual angle; and the broadcasting module is used for broadcasting the explanation content and displaying the text information corresponding to the explanation content, wherein the key information in the text information is highlighted.
In an optional embodiment, the obtaining module is further configured to obtain the voice explanation content for the target vehicle in real time; the information presentation device further includes: the device comprises an identification module, a determination module and a display module; the recognition module is used for carrying out semantic recognition on the voice explanation content; the determining module is used for determining a second target view angle corresponding to the target keyword if the target keyword is identified; the first switching module 52 is specifically configured to: switching the current view of the vehicle model into a view of the vehicle model under a second target view angle; the second switching module 53 is specifically configured to: synchronously switching the current view of the live-action vehicle into the view of the live-action vehicle under the second target view angle according to the linkage relation; the acquisition module is further used for acquiring the explanation content adaptive to the second target visual angle, and the display module is used for highlighting the key information in the explanation content.
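The keyword-driven view switching described here — semantic recognition of the voice explanation yielding a target keyword, which maps to a second target visual angle — can be sketched as a simple lookup. The keyword table, view names, and function name below are purely illustrative assumptions; real semantic recognition would of course be more involved than substring matching.

```python
# Hypothetical keyword-to-view mapping consulted after semantic recognition
# of the live voice explanation content; entries are illustrative only.
KEYWORD_VIEWS = {
    "headlight": "front-left-closeup",
    "trunk": "rear-top",
    "dashboard": "interior-driver-seat",
}

def second_target_view(recognized_text: str):
    """Return the second target visual angle for the first target keyword
    found in the recognized text, or None if no keyword is identified."""
    for keyword, view in KEYWORD_VIEWS.items():
        if keyword in recognized_text.lower():
            return view
    return None

print(second_target_view("Now let's look at the trunk space"))  # rear-top
```

When a keyword is identified, both the model view and (via the linkage relation) the live-action view would be switched to the returned visual angle.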
In an optional embodiment, the first interactive interface displays a local picture of a live-action map corresponding to a live-action vehicle, the second interactive interface displays a vehicle model corresponding to a target vehicle through a first visual angle picture, the first visual angle picture is a model map corresponding to the vehicle model, a first camera point location of a first display style and at least one second camera point location of a second display style are displayed in the model map, and the first camera point location is a target camera point location; the shooting visual angle corresponding to the target camera point location is matched with a first visual angle picture, and the first visual angle picture corresponds to a first roaming visual angle of the first roaming point.
In an optional embodiment, the first interactive operation is a view switching operation; the first switching module 52 is specifically configured to: in response to the switching operation of the view angle on the model diagram, the target camera point location is updated, and the second switching module 53 is specifically configured to: controlling the first interactive interface to display a second visual angle picture matched with the updated target camera point location according to the linkage relation, wherein the second visual angle picture is a local picture corresponding to a second roaming point in the live-action picture, the model picture after the target camera point location is updated corresponds to a first display state of the vehicle model, and the second visual angle picture corresponds to a second display state of the live-action vehicle; and/or, the first switching module 52 is specifically configured to: responding to the visual angle switching operation of the target camera point location, and adjusting the shooting visual angle corresponding to the target camera point location; the second switching module 53 is specifically configured to: and controlling the first interactive interface to display a third visual angle picture matched with the adjusted shooting visual angle according to the linkage relation, wherein the third visual angle picture is a local picture of a second roaming visual angle corresponding to the first roaming point in the live-action picture, the model picture after the shooting visual angle corresponding to the target camera point position is adjusted corresponds to a first display state of the vehicle model, and the third visual angle picture corresponds to a second display state of the live-action vehicle.
In an optional embodiment, the first switching module 52 is specifically configured to: in response to a first selection operation in at least one second camera point location, updating the target camera point location from the first camera point location to a selected second camera point location; and updating the first camera point location into a second display style, and updating the selected second camera point location into the first display style.
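The selection behavior in this embodiment — the selected second camera point becomes the new target, and the first and second display styles are swapped accordingly — can be sketched as below. Class, attribute, and style names are assumptions for illustration.

```python
class ModelMapCameraPoints:
    """Illustrative handling of the first selection operation on the model map:
    the selected second camera point location becomes the target camera point,
    and the display styles of old and new targets are swapped."""

    FIRST_STYLE = "highlighted"  # first display style (target camera point)
    SECOND_STYLE = "normal"      # second display style (non-target points)

    def __init__(self, point_ids):
        self.target = point_ids[0]
        self.styles = {p: self.SECOND_STYLE for p in point_ids}
        self.styles[self.target] = self.FIRST_STYLE

    def select(self, point_id: str) -> None:
        # Restyle the old target, then promote and highlight the new one.
        self.styles[self.target] = self.SECOND_STYLE
        self.target = point_id
        self.styles[point_id] = self.FIRST_STYLE

points = ModelMapCameraPoints(["cam-1", "cam-2", "cam-3"])
points.select("cam-2")
print(points.target, points.styles["cam-1"], points.styles["cam-2"])
# cam-2 normal highlighted
```
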
In an optional embodiment, the first switching module 52 is specifically configured to: and responding to the rotation operation of the target camera point location, and adjusting the shooting visual angle corresponding to the target camera point location.
In an optional embodiment, the second interactive operation is a view switching operation, and the first view displays at least one third roaming point; the first switching module 52 is specifically configured to: responding to the view switching operation of the live-action picture, and controlling the first interactive interface to display a fourth view picture corresponding to the third target roaming point in the live-action picture; the second switching module 53 is specifically configured to: updating the target camera point location according to the linkage relation, wherein the updated target camera point location is adapted to a fourth visual angle picture, the fourth visual angle picture corresponds to a third display state of the live-action vehicle, and a model map after updating the target camera point location corresponds to a fourth display state of the vehicle model; and/or the first switching module 52 is specifically configured to: responding to the visual angle switching operation of the first roaming point, and controlling the first interactive interface to display a fifth visual angle picture of a third roaming visual angle corresponding to the first roaming point in the live-action picture; the second switching module 53 is specifically configured to: and adjusting the shooting visual angle corresponding to the target camera point position according to the linkage relation, wherein the adjusted shooting visual angle is matched with a fifth visual angle picture, the fifth visual angle picture corresponds to a third display state of the live-action vehicle, and a model graph after the shooting visual angle corresponding to the target camera point position is adjusted corresponds to a fourth display state of the vehicle model.
In an optional embodiment, the first switching module 52 is specifically configured to: and responding to a second selection operation in at least one third roaming point, and controlling the first interactive interface to display a fourth visual angle picture corresponding to the selected third target roaming point.
In an optional embodiment, the first switching module 52 is specifically configured to: and controlling the first interactive interface to display a fifth visual angle picture corresponding to the third roaming visual angle in response to the operation of adjusting the roaming visual angle of the first roaming point.
Fig. 6 is a schematic structural diagram of an electronic terminal according to an exemplary embodiment of the present application, where the electronic terminal may be implemented as the first electronic terminal in the information presentation method. As shown in fig. 6, the electronic terminal includes a memory 64 and a processor 65, and further includes a display 67.
The memory 64 is used for storing computer programs and may be configured to store other various data to support operations on the electronic terminal. Examples of such data include instructions for any application or method operating on the electronic terminal.
The memory 64 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
A processor 65, coupled to the memory 64, for executing computer programs in the memory 64 for: providing a first graphical user interface through the display 67, wherein the content displayed by the first graphical user interface includes a first interactive interface, and a live-action vehicle corresponding to the target vehicle is displayed on the first interactive interface; displaying a vehicle model corresponding to the live-action vehicle on a second interactive interface, wherein the live-action vehicle and the vehicle model have a linkage relation, and the vehicle model is obtained by modeling a target vehicle; and responding to the first interactive operation acted on the vehicle model, switching the vehicle model from the current display state to the first display state, and synchronously switching the live-action vehicle from the current display state to the second display state according to the linkage relation, wherein the second display state is matched with the first display state.
In an alternative embodiment, the processor 65 is further configured to: and responding to a second interactive operation acted on the live-action vehicle, switching the live-action vehicle from the current display state to a third display state, and synchronously switching the vehicle model from the current display state to a fourth display state according to the linkage relation, wherein the fourth display state is matched with the third display state.
In an optional embodiment, the second interactive interface and the first interactive interface are the same interactive interface on the first electronic terminal, and the vehicle model and the live-action vehicle are respectively located in an edge area and a central area of the same interactive interface; or the second interactive interface and the first interactive interface are different interactive interfaces on the first electronic terminal; or the second interactive interface is an interactive interface on a second electronic terminal, and the second electronic terminal is in communication connection with the first electronic terminal.
In an alternative embodiment, the interface size of the second interactive interface is smaller than the interface size of the first interactive interface.
In an optional embodiment, the first interactive operation is a local clicking operation for viewing a local area of the vehicle, and the processor 65 is specifically configured to, when switching the vehicle model from the current display state to the first display state and synchronously switching the live-action vehicle from the current display state to the second display state according to the linkage relationship: according to a vehicle local area selected by local clicking operation, acquiring a first local enlarged view corresponding to a vehicle model and a second local enlarged view corresponding to a live-action vehicle; switching the current view of the vehicle model into a first local enlarged view, and switching the current view of the live-action vehicle into a second local enlarged view according to the linkage relation; wherein the first partial enlarged view is an enlarged view of a partial region of the vehicle on the vehicle model and represents a first display state; the second partial enlarged view is an enlarged view of a partial region of the vehicle on the live-action vehicle and represents a second display state.
In an alternative embodiment, the local region of the vehicle selected by the local clicking operation is a local region of the vehicle exterior or a local region of the vehicle interior.
In an optional embodiment, if the first interactive operation is a zoom operation, the processor 65 is specifically configured to, when switching the vehicle model from the current display state to the first display state and synchronously switching the live-action vehicle from the current display state to the second display state according to the linkage relationship: determining a reference scaling according to a scaling track of the scaling operation; based on the reference scaling, carrying out first scaling operation on the current view of the vehicle model, and synchronously carrying out second scaling operation on the current view of the live-action vehicle according to the linkage relation; and the view obtained by the first zooming operation represents a first display state, and the view obtained by the second zooming operation represents a second display state.
In an alternative embodiment, the first zoom operation is different from the second zoom operation in an actual zoom ratio, which is determined based on the reference zoom ratio and the interface size.
In an alternative embodiment, the view obtained by the first zooming operation is a global view of the vehicle model, and the view obtained by the second zooming operation is a third local enlarged view of the live-action vehicle, the processor 65 is further configured to: in the global view of the vehicle model, a view area corresponding to the vehicle local area displayed by the third local enlarged view is marked.
In an optional embodiment, if the first interactive operation is a rotating operation, the processor 65 is specifically configured to, when switching the vehicle model from the current display state to the first display state and synchronously switching the live-action vehicle from the current display state to the second display state according to the linkage relationship: according to the rotation track of the rotation operation, performing first rotation operation on the current view of the vehicle model, and according to the linkage relation, performing second rotation operation on the current view of the live-action vehicle synchronously; wherein the view obtained by the first rotation operation represents a first display state, and the view obtained by the second rotation operation represents a second display state.
In an alternative embodiment, the processor 65 is further configured to: and displaying the relative rotation azimuth information in real time during the first rotation operation of the current view of the vehicle model.
In an alternative embodiment, the processor 65 is further configured to: displaying a perspective switching control associated with the vehicle model; the first interaction operation is a viewing angle switching operation acting on the viewing angle switching control, and when the vehicle model is switched from the current display state to the first display state and the live-action vehicle is synchronously switched from the current display state to the second display state according to the linkage relationship, the processor 65 is specifically configured to: acquiring a first target visual angle corresponding to visual angle switching operation, switching the current view of the vehicle model into the view of the vehicle model under the first target visual angle, and synchronously switching the current view of the live-action vehicle into the view of the live-action vehicle under the first target visual angle according to the linkage relation; and acquiring the explanation content adaptive to the first target visual angle, broadcasting the explanation content and displaying text information corresponding to the explanation content, wherein key information in the text information is highlighted.
In an alternative embodiment, the processor 65 is further configured to: acquiring voice explanation content aiming at a target vehicle in real time; performing semantic recognition on the voice explanation content, and if a target keyword is recognized, determining a second target view angle corresponding to the target keyword; when the vehicle model is switched from the current display state to the first display state and the live-action vehicle is synchronously switched from the current display state to the second display state according to the linkage relationship, the processor 65 is specifically configured to: switching the current view of the vehicle model into the view of the vehicle model under the second target visual angle, and synchronously switching the current view of the live-action vehicle into the view of the live-action vehicle under the second target visual angle according to the linkage relation; and acquiring the explanation content adapted to the second target visual angle, and highlighting key information in the explanation content.
In an optional embodiment, the first interactive interface displays a local picture of a live-action map corresponding to a live-action vehicle, the second interactive interface displays a vehicle model corresponding to a target vehicle through a first visual angle picture, the first visual angle picture is a model map corresponding to the vehicle model, a first camera point location of a first display style and at least one second camera point location of a second display style are displayed in the model map, and the first camera point location is a target camera point location; the shooting visual angle corresponding to the target camera point location is matched with a first visual angle picture, and the first visual angle picture corresponds to a first roaming visual angle of the first roaming point.
In an alternative embodiment, the processor 65 is further configured to: the first interactive operation is a visual angle switching operation; the processor 65 is specifically configured to perform at least one of the following operations when, in response to the first interactive operation acting on the vehicle model, switching the vehicle model from the current display state to the first display state, and synchronously switching the live-action vehicle from the current display state to the second display state according to the linkage relationship: responding to the visual angle switching operation on the model graph, updating the target camera point position, and controlling the first interactive interface to display a second visual angle picture matched with the updated target camera point position according to the linkage relation, wherein the second visual angle picture is a local picture corresponding to a second roaming point in the live-action picture, the model graph after updating the target camera point position corresponds to a first display state of the vehicle model, and the second visual angle picture corresponds to a second display state of the live-action vehicle; responding to the visual angle switching operation of the target camera point location, adjusting the shooting visual angle corresponding to the target camera point location, and controlling the first interactive interface to display a third visual angle picture matched with the adjusted shooting visual angle according to the linkage relation, wherein the third visual angle picture is a local picture of a second roaming visual angle corresponding to the first roaming point in the live-action picture, the model picture after the shooting visual angle corresponding to the target camera point location is adjusted corresponds to the first display state of the vehicle model, and the third visual angle picture corresponds to the second display state of the live-action vehicle.
In an optional embodiment, when the processor 65 updates the target camera point location in response to the view switching operation on the model diagram, the processor is specifically configured to: in response to a first selection operation in at least one second camera point location, updating the target camera point location from the first camera point location to a selected second camera point location; and updating the first camera point location into a second display style, and updating the selected second camera point location into the first display style.
In an optional embodiment, when responding to the view switching operation on the target camera point, the processor 65 is specifically configured to: and responding to the rotation operation of the target camera point position, and adjusting the shooting visual angle corresponding to the target camera point position.
In an optional embodiment, the second interactive operation is a view switching operation, and the first view displays at least one third roaming point; the processor 65 is specifically configured to perform at least one of the following operations when, in response to the second interactive operation performed on the live-action vehicle, the live-action vehicle is switched from the current display state to the third display state, and the vehicle model is synchronously switched from the current display state to the fourth display state according to the linkage relation: responding to the visual angle switching operation of the live-action picture, controlling the first interactive interface to display a fourth visual angle picture corresponding to a third target roaming point in the live-action picture, and updating the target camera point location according to the linkage relation, wherein the updated target camera point location is matched with the fourth visual angle picture, the fourth visual angle picture corresponds to a third display state of the live-action vehicle, and the model map after the target camera point location is updated corresponds to a fourth display state of the vehicle model; and in response to the visual angle switching operation of the first roaming point, controlling the first interactive interface to display a fifth visual angle picture corresponding to a third roaming visual angle of the first roaming point in the live-action picture, and adjusting the shooting visual angle corresponding to the target camera point location according to the linkage relation, wherein the adjusted shooting visual angle is adapted to the fifth visual angle picture, the fifth visual angle picture corresponds to a third display state of the live-action vehicle, and the model map after the shooting visual angle corresponding to the target camera point location is adjusted corresponds to a fourth display state of the vehicle model.
In an optional embodiment, when controlling the first interactive interface to display the fourth visual angle picture corresponding to the third target roaming point in the live-action view in response to the visual angle switching operation on the live-action view, the processor 65 is specifically configured to: in response to a second selection operation among the at least one third roaming point, control the first interactive interface to display the fourth visual angle picture corresponding to the selected third target roaming point.
In an optional embodiment, when controlling the first interactive interface to display the fifth visual angle picture of the third roaming visual angle corresponding to the first roaming point in the live-action view in response to the visual angle switching operation on the first roaming point, the processor 65 is specifically configured to: in response to an operation of adjusting the roaming visual angle of the first roaming point, control the first interactive interface to display the fifth visual angle picture corresponding to the third roaming visual angle.
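Taken together, the embodiments above describe a two-way mapping: selecting a roaming point in the live-action view updates the target camera point in the model map, and rotating within a roaming point keeps the camera's shooting angle adapted to the roaming angle. The following minimal sketch illustrates that linkage; all class, method, and attribute names (`ModelMap`, `LiveActionView`, and so on) are hypothetical illustrations, not the patented implementation.

```python
# Hypothetical sketch of the camera-point / roaming-point linkage described
# above. Names and data shapes are illustrative assumptions only.

class ModelMap:
    """Second interactive interface: model picture with camera points."""
    def __init__(self, camera_points):
        self.camera_points = camera_points           # point id -> shooting angle
        self.target_point = next(iter(camera_points))

    def set_target(self, point_id):
        # Corresponds to swapping the first/second display styles of points.
        self.target_point = point_id

    def set_shooting_angle(self, angle):
        self.camera_points[self.target_point] = angle


class LiveActionView:
    """First interactive interface: live-action picture with roaming points."""
    def __init__(self, roaming_points, model_map):
        self.roaming_points = roaming_points         # roaming point -> camera point
        self.current_point = next(iter(roaming_points))
        self.roam_angle = 0
        self.model_map = model_map                   # the linkage relation

    def switch_roaming_point(self, point):
        # "Fourth visual angle picture": show the selected roaming point ...
        self.current_point = point
        # ... and update the target camera point through the linkage relation.
        self.model_map.set_target(self.roaming_points[point])

    def adjust_roam_angle(self, angle):
        # "Fifth visual angle picture": rotate within the current roaming point,
        # keeping the target camera point's shooting angle adapted to it.
        self.roam_angle = angle
        self.model_map.set_shooting_angle(angle)
```

In this sketch the linkage runs through a direct object reference; a real implementation spanning two terminals would route the same state changes over the communication connection described below.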
With the electronic terminal provided in this embodiment, the live-action vehicle corresponding to the target vehicle is displayed on the first interactive interface, the vehicle model corresponding to the target vehicle is displayed on the second interactive interface, and a linkage relation exists between the live-action vehicle and the vehicle model. When an interactive operation is performed on either the live-action vehicle or the vehicle model, the other responds in linkage according to the linkage relation, so that a user can conveniently and quickly control the live-action vehicle or the vehicle model, which improves the user experience.
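One concrete instance of this linkage is the zooming behavior described elsewhere in this application: a reference zoom scale is derived from the zoom gesture's track, and the actual scale applied on each side is adapted to the interface size, which is why the smaller second interactive interface can end up showing a different (for example, global versus locally enlarged) view. The sketch below shows one plausible way to compute such scales; the specific formulas and function names are assumptions, not taken from the application.

```python
# Hedged sketch of the zoom linkage: a reference scale from the gesture track,
# adapted per interface size. The formulas below are illustrative assumptions.

def reference_scale(track_start: float, track_end: float) -> float:
    """Derive a reference zoom scale from the pinch track's length ratio."""
    return track_end / track_start

def actual_scale(ref: float, interface_size: float, base_size: float) -> float:
    """Adapt the reference scale to an interface's size (assumed formula)."""
    return ref * (interface_size / base_size)

ref = reference_scale(100.0, 150.0)             # pinch grew the track 1.5x
model_zoom = actual_scale(ref, 1920.0, 1920.0)  # first interface keeps 1.5
live_zoom = actual_scale(ref, 960.0, 1920.0)    # smaller interface gets 0.75
```

Because the two actual scales differ, one view may remain global while the other is locally enlarged, which is exactly the case where the application marks the enlarged region on the global view.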
Further, as shown in fig. 6, the electronic terminal further includes: a communication component 66, a power component 68, an audio component 69, and the like. Only some components are schematically shown in fig. 6, which does not mean that the electronic terminal includes only the components shown in fig. 6. It should be noted that the components within the dashed-line frame in fig. 6 are optional rather than necessary components, and their presence may be determined according to the product form of the electronic terminal.
Accordingly, embodiments of the present application also provide a computer readable storage medium storing a computer program, which, when executed by a processor, causes the processor to implement the steps of the method shown in fig. 1.
The communication component in fig. 6 is configured to facilitate wired or wireless communication between the device in which the communication component is located and other devices. The device where the communication component is located can access a wireless network based on a communication standard, such as WiFi, a mobile communication network such as 2G, 3G, 4G/LTE or 5G, or a combination thereof. In an exemplary embodiment, the communication component receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component further includes a Near Field Communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
The display in fig. 6 includes a screen, which may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user. The touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action but also detect the duration and pressure associated with the touch or slide operation.
The power component in fig. 6 provides power to the various components of the device in which the power component is located. The power component may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device in which the power component is located.
The audio component in fig. 6 may be configured to output and/or input audio signals. For example, the audio component includes a microphone (MIC) configured to receive external audio signals when the device in which the audio component is located is in an operating mode, such as a call mode, a recording mode, or a voice recognition mode. The received audio signals may be further stored in the memory or transmitted via the communication component. In some embodiments, the audio component further includes a speaker for outputting audio signals.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and so forth) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include a volatile memory in a computer-readable medium, such as Random Access Memory (RAM), and/or a non-volatile memory, such as Read-Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, Phase-change RAM (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory or other memory technology, Compact Disc Read-Only Memory (CD-ROM), Digital Versatile Disc (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a(n) ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement or the like made within the spirit and principle of the present application shall be included in the scope of the claims of the present application.

Claims (19)

1. An information display method, wherein a first graphical user interface is provided through a first electronic terminal, content displayed by the first graphical user interface comprises a first interactive interface, and a live-action vehicle corresponding to a target vehicle is displayed on the first interactive interface, the method comprising:
displaying a vehicle model corresponding to the live-action vehicle on a second interactive interface, wherein the live-action vehicle and the vehicle model have a linkage relation, and the vehicle model is obtained by modeling the target vehicle;
responding to a first interactive operation acted on the vehicle model, switching the vehicle model from a current display state to a first display state, and synchronously switching the live-action vehicle from the current display state to a second display state according to the linkage relation, wherein the second display state is matched with the first display state; the second interactive interface is an interactive interface on a second electronic terminal, and the second electronic terminal is in communication connection with the first electronic terminal; the interface size of the second interactive interface is smaller than that of the first interactive interface;
the first interactive operation comprises a zooming operation, and the switching the vehicle model from the current display state to the first display state and synchronously switching the live-action vehicle from the current display state to the second display state according to the linkage relation comprises: determining a reference zoom scale according to a zoom track of the zooming operation; performing a first zooming operation on a current view of the vehicle model based on the reference zoom scale, and synchronously performing a second zooming operation on a current view of the live-action vehicle according to the linkage relation; wherein the view obtained by the first zooming operation represents the first display state, and the view obtained by the second zooming operation represents the second display state;
when the actual scaling of the first zooming operation is different from that of the second zooming operation, the view obtained by the first zooming operation is a global view of the vehicle model, and the view obtained by the second zooming operation is a third local enlarged view of the live-action vehicle, the method further includes: marking a view area corresponding to the vehicle local area displayed by the third local enlarged view in the global view of the vehicle model, wherein the actual zoom scale is determined according to the reference zoom scale and the interface size.
2. The method of claim 1, further comprising:
and responding to a second interactive operation acted on the live-action vehicle, switching the live-action vehicle from the current display state to a third display state, and synchronously switching the vehicle model from the current display state to a fourth display state according to the linkage relation, wherein the fourth display state is matched with the third display state.
3. The method according to claim 1, wherein the second interactive interface and the first interactive interface are the same interactive interface on the first electronic terminal, and the vehicle model and the live-action vehicle are respectively located in an edge region and a central region of the same interactive interface; or the second interactive interface and the first interactive interface are different interactive interfaces on the first electronic terminal.
4. The method of claim 1, wherein the first interactive operation comprises a local clicking operation for viewing a vehicle local area, and the switching the vehicle model from the current display state to the first display state and synchronously switching the live-action vehicle from the current display state to the second display state according to the linkage relation comprises:
obtaining, according to the vehicle local area selected by the local clicking operation, a first local enlarged view corresponding to the vehicle model and a second local enlarged view corresponding to the live-action vehicle;
switching the current view of the vehicle model to the first local enlarged view, and switching the current view of the live-action vehicle to the second local enlarged view according to the linkage relation;
wherein the first local enlarged view is an enlarged view of the vehicle local area on the vehicle model and represents the first display state; and the second local enlarged view is an enlarged view of the vehicle local area on the live-action vehicle and represents the second display state.
5. The method according to claim 4, wherein the local region of the vehicle selected by the local clicking operation is a local region of the vehicle appearance or a local region of the vehicle interior.
6. The method of claim 1, wherein the first interactive operation comprises a rotation operation, and the switching the vehicle model from the current display state to the first display state and synchronously switching the live-action vehicle from the current display state to the second display state according to the linkage relation comprises:
performing a first rotation operation on the current view of the vehicle model according to the rotation track of the rotation operation, and synchronously performing a second rotation operation on the current view of the live-action vehicle according to the linkage relation; wherein the view obtained by the first rotation operation represents the first display state, and the view obtained by the second rotation operation represents the second display state.
7. The method of claim 6, further comprising:
and displaying the relative rotation azimuth information in real time in the process of performing the first rotation operation on the current view of the vehicle model.
8. The method of claim 1, further comprising: displaying a perspective switching control associated with the vehicle model;
the first interactive operation comprises a visual angle switching operation acted on the visual angle switching control, the vehicle model is switched from the current display state to the first display state, and the live-action vehicle is synchronously switched from the current display state to the second display state according to the linkage relationship, and the method comprises the following steps:
acquiring a first target visual angle corresponding to the visual angle switching operation, switching the current view of the vehicle model to the view of the vehicle model under the first target visual angle, and synchronously switching the current view of the live-action vehicle to the view of the live-action vehicle under the first target visual angle according to the linkage relation; and
acquiring explanation content adapted to the first target visual angle, broadcasting the explanation content, and displaying text information corresponding to the explanation content, wherein key information in the text information is highlighted.
9. The method of claim 1, further comprising:
acquiring voice explanation contents aiming at the target vehicle in real time;
performing semantic recognition on the voice explanation content, and if a target keyword is recognized, determining a second target view angle corresponding to the target keyword;
switching the vehicle model from a current display state to a first display state, and synchronously switching the live-action vehicle from the current display state to a second display state according to the linkage relationship, wherein the method comprises the following steps:
switching the current view of the vehicle model to the view of the vehicle model under the second target view angle, and synchronously switching the current view of the live-action vehicle to the view of the live-action vehicle under the second target view angle according to the linkage relation; and
acquiring explanation content adapted to the second target visual angle, and highlighting key information in the explanation content.
10. The method according to claim 2, wherein the first interactive interface displays a partial image of a live-action map corresponding to the live-action vehicle, the second interactive interface displays a vehicle model corresponding to the target vehicle through a first perspective image, the first perspective image is a model map corresponding to the vehicle model, the model map displays a first camera point location of a first display style and at least one second camera point location of a second display style, and the first camera point location is a target camera point location;
the shooting visual angle corresponding to the target camera point location is matched with a first visual angle picture, and the first visual angle picture corresponds to a first roaming visual angle of a first roaming point.
11. The method of claim 10, wherein the first interactive operation comprises a visual angle switching operation, and the responding to the first interactive operation acting on the vehicle model, switching the vehicle model from the current display state to the first display state, and synchronously switching the live-action vehicle from the current display state to the second display state according to the linkage relation comprises at least one of the following:
responding to a visual angle switching operation on the model map, updating the target camera point location, and controlling the first interactive interface to display a second visual angle picture matched with the updated target camera point location according to the linkage relation, wherein the second visual angle picture is a local picture corresponding to a second roaming point in the live-action map, the model map after the target camera point location is updated corresponds to the first display state of the vehicle model, and the second visual angle picture corresponds to the second display state of the live-action vehicle;
responding to the visual angle switching operation of the target camera point, adjusting a shooting visual angle corresponding to the target camera point, and controlling the first interactive interface to display a third visual angle picture matched with the adjusted shooting visual angle according to the linkage relation, wherein the third visual angle picture is a local picture of a second roaming visual angle corresponding to the first roaming point in the live-action picture, the model picture after the shooting visual angle corresponding to the target camera point is adjusted corresponds to a first display state of the vehicle model, and the third visual angle picture corresponds to a second display state of the live-action vehicle.
12. The method of claim 11, wherein the updating the target camera point location in response to a perspective switching operation on the model map comprises:
in response to a first selection operation among the at least one second camera point location, updating the target camera point location from the first camera point location to the selected second camera point location;
updating the first camera point location to the second display style, and updating the selected second camera point location to the first display style.
13. The method according to claim 11, wherein the adjusting the shooting angle of view corresponding to the target camera point location in response to the angle of view switching operation on the target camera point location comprises:
in response to a rotation operation on the target camera point location, adjusting the shooting visual angle corresponding to the target camera point location.
14. The method according to claim 10, wherein the second interactive operation is a view switching operation, and the first view displays at least one third roaming point;
the responding to the second interactive operation acting on the live-action vehicle, switching the live-action vehicle from the current display state to the third display state, and synchronously switching the vehicle model from the current display state to the fourth display state according to the linkage relation comprises at least one of the following:
responding to the visual angle switching operation of the live-action image, controlling the first interactive interface to display a fourth visual angle image corresponding to a third target roaming point in the live-action image, and updating the target camera point position according to the linkage relation, wherein the updated target camera point position is matched with the fourth visual angle image, the fourth visual angle image corresponds to a third display state of the live-action vehicle, and the model image after updating the target camera point position corresponds to a fourth display state of the vehicle model;
and responding to the visual angle switching operation of the first roaming point, controlling the first interactive interface to display a fifth visual angle picture of a third roaming visual angle corresponding to the first roaming point in the live-action picture, and adjusting a shooting visual angle corresponding to the target camera point according to the linkage relation, wherein the adjusted shooting visual angle is matched with the fifth visual angle picture, the fifth visual angle picture corresponds to a third display state of the live-action vehicle, and the model picture after the shooting visual angle corresponding to the target camera point is adjusted corresponds to a fourth display state of the vehicle model.
15. The method according to claim 14, wherein the controlling the first interactive interface to display a fourth perspective picture corresponding to a third target roaming point in the live-action view in response to the perspective switching operation on the live-action view comprises:
in response to a second selection operation among the at least one third roaming point, controlling the first interactive interface to display a fourth visual angle picture corresponding to the selected third target roaming point.
16. The method of claim 14, wherein in response to the perspective switching operation on the first roaming point, controlling the first interactive interface to display a fifth perspective screen of a third roaming perspective corresponding to the first roaming point in the live-action view comprises:
in response to an operation of adjusting the roaming visual angle of the first roaming point, controlling the first interactive interface to display a fifth visual angle picture corresponding to the third roaming visual angle.
17. An information display device, wherein the information display device provides a first graphical user interface, content displayed by the first graphical user interface includes a first interactive interface, and a live-action vehicle corresponding to a target vehicle is displayed on the first interactive interface, the information display device includes: the system comprises a display module, a first switching module, a second switching module and a marking module;
the display module is used for displaying a vehicle model corresponding to the live-action vehicle on a second interactive interface, wherein a linkage relationship exists between the live-action vehicle and the vehicle model, and the vehicle model is obtained by modeling the target vehicle;
the first switching module is used for responding to a first interactive operation acted on the vehicle model and switching the vehicle model from a current display state to a first display state;
the second switching module is used for synchronously switching the live-action vehicle from the current display state to a second display state according to the linkage relation, and the second display state is matched with the first display state; the second interactive interface is an interactive interface on a second electronic terminal, and the second electronic terminal is in communication connection with the information display device; the interface size of the second interactive interface is smaller than that of the first interactive interface;
the first interaction operation comprises a zoom operation, and the first switching module is specifically configured to: determining a reference scaling according to the scaling track of the scaling operation; performing a first zoom operation on a current view of the vehicle model based on the reference zoom scale; the second switching module is specifically configured to: according to the linkage relation, synchronously carrying out second zooming operation on the current view of the live-action vehicle; the view obtained by the first zooming operation represents a first display state, and the view obtained by the second zooming operation represents a second display state;
under the condition that the actual scaling of the first zooming operation is different from that of the second zooming operation, the view obtained by the first zooming operation is a global view of the vehicle model, the view obtained by the second zooming operation is a third local enlarged view of the live-action vehicle, and the marking module is configured to: marking a view area corresponding to the vehicle local area displayed by the third local enlarged view in the global view of the vehicle model, wherein the actual zoom scale is determined according to the reference zoom scale and the interface size.
18. An electronic terminal, characterized in that it comprises: a memory, a processor, and a display;
the memory for storing a computer program;
the processor, coupled with the memory, to execute the computer program to: providing a first graphical user interface through the display, wherein the content displayed by the first graphical user interface comprises a first interactive interface, and a live-action vehicle corresponding to the target vehicle is displayed on the first interactive interface; displaying a vehicle model corresponding to the live-action vehicle on a second interactive interface, wherein the live-action vehicle and the vehicle model have a linkage relation, and the vehicle model is obtained by modeling the target vehicle; responding to a first interactive operation acted on the vehicle model, switching the vehicle model from a current display state to a first display state, and synchronously switching the live-action vehicle from the current display state to a second display state according to the linkage relation, wherein the second display state is matched with the first display state; the second interactive interface is an interactive interface on a second electronic terminal, and the second electronic terminal is in communication connection with the electronic terminal; the interface size of the second interactive interface is smaller than that of the first interactive interface;
the first interaction operation comprises a zoom operation, and the processor is specifically configured to: determining a reference scaling according to the scaling track of the scaling operation; based on the reference scaling, carrying out first scaling operation on the current view of the vehicle model, and synchronously carrying out second scaling operation on the current view of the live-action vehicle according to the linkage relation; the view obtained by the first zooming operation represents a first display state, and the view obtained by the second zooming operation represents a second display state;
when the actual scaling of the first scaling operation is different from that of the second scaling operation, a view obtained by the first scaling operation is a global view of the vehicle model, and a view obtained by the second scaling operation is a third local enlarged view of the live-action vehicle, the processor is further configured to: marking a view area corresponding to the vehicle local area displayed by the third local enlarged view in the global view of the vehicle model, wherein the actual zoom scale is determined according to the reference zoom scale and the interface size.
19. A computer-readable storage medium having a computer program stored thereon, which, when executed by a processor, causes the processor to carry out the steps of the method according to any one of claims 1 to 16.
CN202111682545.1A 2021-12-10 2021-12-10 Information display method, equipment, device and storage medium Active CN114371898B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111682545.1A CN114371898B (en) 2021-12-10 2021-12-10 Information display method, equipment, device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111682545.1A CN114371898B (en) 2021-12-10 2021-12-10 Information display method, equipment, device and storage medium

Publications (2)

Publication Number Publication Date
CN114371898A CN114371898A (en) 2022-04-19
CN114371898B true CN114371898B (en) 2022-11-22

Family

ID=81142382

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111682545.1A Active CN114371898B (en) 2021-12-10 2021-12-10 Information display method, equipment, device and storage medium

Country Status (1)

Country Link
CN (1) CN114371898B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115033133B (en) * 2022-05-13 2023-03-17 北京五八信息技术有限公司 Progressive information display method and device, electronic equipment and storage medium

Citations (1)

Publication number Priority date Publication date Assignee Title
CN112581618A (en) * 2020-12-23 2021-03-30 深圳前海贾维斯数据咨询有限公司 Three-dimensional building model and real scene comparison method and system in building engineering industry

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110141869A (en) * 2019-04-11 2019-08-20 腾讯科技(深圳)有限公司 Method of controlling operation thereof, device, electronic equipment and storage medium
CN112182433A (en) * 2020-09-25 2021-01-05 瑞庭网络技术(上海)有限公司 Display switching method and device


Also Published As

Publication number Publication date
CN114371898A (en) 2022-04-19

Similar Documents

Publication Publication Date Title
KR101657120B1 (en) Mobile terminal and Method for displaying image thereof
KR102145190B1 (en) Mobile terminal and control method thereof
KR20220130197A (en) Filming method, apparatus, electronic equipment and storage medium
KR102080746B1 (en) Mobile terminal and control method thereof
CN106791893A (en) Net cast method and device
CN105222802A (en) navigation, navigation video generation method and device
US20200380724A1 (en) Personalized scene image processing method, apparatus and storage medium
KR20130122334A (en) Mobile terminal and control method thereof
CN106227419A (en) Screenshotss method and device
KR20180131908A (en) Mobile terminal and method for controlling the same
CN107515669A (en) Display methods and device
CN114371898B (en) Information display method, equipment, device and storage medium
CN107027041B (en) Scene display method and device
CN109582134B (en) Information display method and device and display equipment
CN107729530A (en) Map Switch method and device
CN113225489B (en) Image special effect display method and device, electronic equipment and storage medium
CN112783316A (en) Augmented reality-based control method and apparatus, electronic device, and storage medium
CN113900510A (en) Vehicle information display method, device and storage medium
WO2024051556A1 (en) Wallpaper display method, electronic device and storage medium
CN111538451A (en) Weather element display method and device and storage medium
KR101872861B1 (en) Mobile terminal and control method therof
KR101667585B1 (en) Mobile terminal and object information display method thereof
CN112714256B (en) Shooting method, shooting device, electronic equipment and readable storage medium
CN104461303A (en) Method and device for adjusting interfaces
CN114089890A (en) Vehicle driving simulation method, device, storage medium and program product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant