
Page display method and device, electronic equipment and medium

Info

Publication number
CN115061762A
CN115061762A
Authority
CN
China
Prior art keywords
page
vehicle
target
display
component
Prior art date
Legal status
Withdrawn
Application number
CN202210639477.9A
Other languages
Chinese (zh)
Inventor
左声勇
Current Assignee
Apollo Zhilian Beijing Technology Co Ltd
Original Assignee
Apollo Zhilian Beijing Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Apollo Zhilian Beijing Technology Co Ltd
Priority to CN202210639477.9A
Publication of CN115061762A
Legal status: Withdrawn (current)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/451 Execution arrangements for user interfaces
    • G06F3/0414 Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means, using force sensing means to determine a position
    • G06F3/0416 Control or interface arrangements specially adapted for digitisers
    • G06F3/04847 Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
    • G06F3/0488 Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser, e.g. input of commands through traced gestures

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The disclosure provides a page display method, a page display device, an electronic device, and a medium, relating to the field of computer technology and in particular to the fields of the Internet of Vehicles, automatic driving, intelligent cabins, applets, and cloud services. The specific implementation scheme is as follows: in response to at least one interactive operation, at least one target vehicle-mounted display is determined from the candidate vehicle-mounted displays; a data acquisition instruction is generated according to the interactive operation, and page data of a page to be displayed is acquired according to the data acquisition instruction; the page data is rendered, and the rendering result is sent to the target vehicle-mounted display so that the target vehicle-mounted display displays it. The method and device thereby dynamically display the page to be displayed on different vehicle-mounted displays and meet users' need to view the page to be displayed on multiple screens.

Description

Page display method and device, electronic equipment and medium
Technical Field
The present disclosure relates to the field of computer technology, in particular to the fields of smart cars, smart cabins, applets, and cloud services, and more specifically to a page display method and apparatus, an electronic device, and a medium.
Background
With the development of automotive technology, a car is no longer merely a means of transportation; it carries more and more entertainment functions. For example, vehicles are equipped with an increasing number of displays that can interact with users, and the most common interactive displays currently include the central control display, the copilot display, the rear-row entertainment display, and the like.
At present, different configurations of the same vehicle model may be equipped with different numbers of displays.
Disclosure of Invention
The present disclosure provides a method, an apparatus, an electronic device, and a medium for displaying a page on multiple vehicle-mounted displays.
According to an aspect of the present disclosure, a method for displaying a page is provided, including:
determining at least one target in-vehicle display from the candidate in-vehicle displays in response to at least one interactive operation;
generating a data acquisition instruction according to the interactive operation, and acquiring page data of a page to be displayed according to the data acquisition instruction;
rendering the page data, and sending a rendering result to the target vehicle-mounted display to enable the target vehicle-mounted display to display the rendering result.
According to another aspect of the present disclosure, there is provided a display device for a page, including:
the display determination module is used for responding to at least one interactive operation and determining at least one target vehicle-mounted display from the candidate vehicle-mounted displays;
the data acquisition module is used for generating a data acquisition instruction according to the interactive operation and acquiring page data of a page to be displayed according to the data acquisition instruction;
and the data rendering module is used for rendering the page data and sending a rendering result to the target vehicle-mounted display so that the target vehicle-mounted display can display the rendering result.
According to another aspect of the present disclosure, there is provided an electronic device including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method according to any one of the embodiments of the present disclosure.
According to another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method according to any one of the embodiments of the present disclosure.
According to another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of the embodiments of the present disclosure.
It should be understood that the statements in this section are not intended to identify key or critical features of the embodiments of the present disclosure, nor are they intended to limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a flowchart of a page display method disclosed according to an embodiment of the present disclosure;
FIG. 2 is a flowchart of another page display method disclosed according to an embodiment of the present disclosure;
FIG. 3 is a flowchart of another page display method disclosed according to an embodiment of the present disclosure;
FIG. 4 is a flowchart of another page display method disclosed according to an embodiment of the present disclosure;
FIG. 5 is a schematic structural diagram of a page display apparatus disclosed according to an embodiment of the present disclosure;
FIG. 6 is a block diagram of an electronic device for implementing the page display method disclosed in the embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
At present, different configurations of the same vehicle model may be equipped with different numbers of displays. For example, a low-configuration vehicle is usually equipped with only a central control display for the driver to view, while a high-configuration vehicle is usually further equipped with a copilot display for the front passenger, and may even have rear-row displays for rear passengers.
So that the same functions can serve users of both low-configuration and high-configuration models, automobile manufacturers restrict certain functions to the most basic display, the central control display. Taking vehicle-mounted applets as an example, most existing vehicle-mounted applets are limited to being displayed only on the central control display through an Activity component. Because the Activity component does not support multi-screen rendering, users cannot use the vehicle-mounted applet on the copilot display or a rear-row display, which greatly diminishes the in-vehicle applet experience. How to satisfy in-vehicle users' need to view the vehicle-mounted applet on multiple screens has therefore become a problem that needs to be solved urgently.
Fig. 1 is a flowchart of a page display method disclosed according to an embodiment of the present disclosure. This embodiment may be applied to displaying a page to be displayed on multiple screens through at least one vehicle-mounted display. The method of this embodiment can be executed by the page display apparatus disclosed in the embodiments of the present disclosure, which can be implemented in software and/or hardware and integrated into any electronic device with computing capability.
As shown in fig. 1, the method for displaying a page disclosed in this embodiment may include:
and S101, responding to at least one interactive operation, and determining at least one target vehicle-mounted display from the candidate vehicle-mounted displays.
The interactive operation is performed by at least one user in the vehicle on a candidate vehicle-mounted display and is used to control the page content displayed by that display. The types of interactive operation include, but are not limited to, touch operations and voice operations. For example, a user may tap within a page displayed by a candidate vehicle-mounted display to control it to jump to different page content; as another example, a user may issue a voice operation to remotely control a candidate vehicle-mounted display to jump to different page content.
Candidate vehicle-mounted displays are display screens provided in the vehicle to visually present information to users in the vehicle, such as electronic navigation, movie pictures, or game pictures. They include, but are not limited to, a central control display in the middle of the dashboard, a copilot display in front of the front passenger seat, rear-row displays in the rear of the vehicle, and the like; this embodiment does not limit how the candidate displays are arranged. Each candidate vehicle-mounted display is connected to a controller of the vehicle, which may be a System on Chip (SoC) of the vehicle.
In one embodiment, when a user performs a touch operation on at least one candidate vehicle-mounted display, for example, clicks on at least one candidate vehicle-mounted display, the controller monitors a change in a pressure value applied to the at least one candidate vehicle-mounted display, and then takes the at least one candidate vehicle-mounted display as a target vehicle-mounted display.
In another embodiment, the controller is connected to a sound pickup device in the vehicle, such as a microphone, and acquires through it the voice information of voice operations performed by the user. The controller then estimates the distance between the user and the sound pickup device from the volume of that voice information: within the normal volume range, a larger volume value indicates that the user is closer to the sound pickup device, and a smaller volume value indicates that the user is farther away.
The controller determines the position of the user in the vehicle according to the distance value, for example whether the user is in the driver seat, the copilot seat, the rear row behind the driver, or the rear row behind the copilot. At least one target vehicle-mounted display is then determined from the candidate vehicle-mounted displays according to the user's position in the vehicle: for example, if the user is determined to be in the driver seat, the central control display is taken as the target vehicle-mounted display; if the user is in the copilot seat, the copilot display is taken as the target vehicle-mounted display; and if the user is in the rear row behind the driver, the rear-row display behind the driver is taken as the target vehicle-mounted display.
By responding to at least one interactive operation and determining at least one target vehicle-mounted display from the candidate vehicle-mounted displays, the target vehicle-mounted display on which the page to be displayed will be shown is determined based on the interactive operation, laying a data foundation for subsequently controlling the target vehicle-mounted display to display the page to be displayed.
S102, generating a data acquisition instruction according to the interactive operation, and acquiring page data of the page to be displayed according to the data acquisition instruction.
The page to be displayed represents a page which needs to be displayed in the target vehicle-mounted display in response to the interactive operation, and the page includes but is not limited to an application page, a webpage, an applet page and the like. The page data represents the page data to be rendered contained in the page to be shown.
In one implementation, the controller determines the position information of the user's touch operation on the target vehicle-mounted display, determines from that position information the current page of the target vehicle-mounted display and the target page component the user intends to tap, and finally generates a data acquisition instruction according to the target page component. The data acquisition instruction is used to acquire the page data of the page corresponding to the target page component as the page data of the page to be displayed.
For example, if the user clicks the open button of the "XX applet" on the current page of the target vehicle-mounted display, the open button of the "XX applet" is used as the target page component, and the default initial page of the "XX applet" is used as the page to be displayed.
In another embodiment, the controller performs voice recognition on the voice information of the voice operation, determines from the recognition result the target page component on the current page of the target vehicle-mounted display that the user wants to select, and finally generates a data acquisition instruction according to the target page component. The data acquisition instruction is used to acquire the page data of the page corresponding to the target page component as the page data of the page to be displayed.
For example, if the voice command is "please open XX applet", the voice command is subjected to voice recognition, and the open button of the "XX applet" is used as the target page component, and the default initial page of the "XX applet" is used as the page to be displayed.
By generating the data acquisition instruction according to the interactive operation and acquiring the page data of the page to be displayed according to the data acquisition instruction, the page that the user wants to view is determined, laying a data foundation for subsequently controlling the target vehicle-mounted display to display the page to be displayed.
S103, rendering the page data, and sending a rendering result to the target vehicle-mounted display, so that the target vehicle-mounted display displays the rendering result.
In one embodiment, the controller renders the page data, for example, by a page rendering component in a page rendering layer, and generates rendering frame data as a rendering result. And then sending the rendering frame data to the target vehicle-mounted display, so that the target vehicle-mounted display displays the rendering frame data after acquiring the rendering frame data. For example, the controller acquires a display identifier corresponding to the target vehicle-mounted display, and sends rendering frame data to the target vehicle-mounted display through the page rendering component according to the display identifier.
In this embodiment, at least one target vehicle-mounted display is determined from the candidate vehicle-mounted displays in response to at least one interactive operation, a data acquisition instruction is generated according to the interactive operation, page data of the page to be displayed is acquired according to the data acquisition instruction, the page data is rendered, and the rendering result is sent to the target vehicle-mounted display so that the target vehicle-mounted display displays it. The page to be displayed is thereby dynamically displayed on different vehicle-mounted displays, meeting users' need to view the page to be displayed on multiple screens.
Fig. 2 is a flowchart of another page display method disclosed according to an embodiment of the present disclosure, which further optimizes and expands the above technical solution and can be combined with the above optional embodiments.
As shown in fig. 2, the method for displaying a page disclosed in this embodiment may include:
S201, responding to at least one interactive operation, and determining at least one target vehicle-mounted display from the candidate vehicle-mounted displays.
S202, generating a data acquisition instruction according to the interactive operation, and acquiring page data of the page to be displayed according to the data acquisition instruction.
S203, rendering the page data by using the page rendering component, generating a rendering result, and sending the rendering result to the target vehicle-mounted display by using the page rendering component according to the identification information of the target vehicle-mounted display.
The identification information represents a display identification of the vehicle-mounted display, and the identification information has uniqueness, namely the unique corresponding vehicle-mounted display can be determined according to the identification information. The page rendering component is any rendering component with a multi-screen rendering function.
In one embodiment, the controller reads a configuration file of the target in-vehicle display and obtains identification information of the target in-vehicle display from the configuration file. The controller packages the identification information and the page data and inputs the packaged data into a page rendering component in the page rendering layer together. And the page rendering component renders the page data, generates rendering frame data as a rendering result, and sends the rendering frame data to the target vehicle-mounted display according to the identification information, so that the target vehicle-mounted display displays the rendering frame data.
For example, assuming that the target vehicle-mounted display is the central control display and its identification information is "Display1", the controller packages "Display1" and the acquired page data and inputs them together into the page rendering component in the page rendering layer. The page rendering component renders the page data to generate rendering frame data and sends it to the central control display according to "Display1", so that the central control display displays the rendering frame data.
As another example, assuming that the target vehicle-mounted display is the copilot display and its identification information is "Display2", the controller packages "Display2" and the acquired page data and inputs them together into the page rendering component in the page rendering layer. The page rendering component renders the page data to generate rendering frame data and sends it to the copilot display according to "Display2", so that the copilot display displays the rendering frame data.
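A minimal sketch of this dispatch step is given below; the RenderRequest and PageRenderer names are hypothetical and only illustrate packaging the identification information together with the page data and routing the rendering frame data by display identifier.

```kotlin
// Sketch only, not the disclosed implementation: "RenderRequest" and "PageRenderer"
// are assumed names; the idea of keying dispatch on identification information such
// as "Display1"/"Display2" comes from the examples above.
data class RenderRequest(
    val displayId: String,   // identification information, e.g. "Display1"
    val pageData: String     // page data of the page to be displayed
)

interface PageRenderer {
    fun render(pageData: String): ByteArray                 // produces rendering frame data
    fun sendToDisplay(displayId: String, frame: ByteArray)  // routes the frame by identifier
}

fun dispatch(request: RenderRequest, renderer: PageRenderer) {
    val frame = renderer.render(request.pageData)     // render once
    renderer.sendToDisplay(request.displayId, frame)   // send to the target display
}
```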
By rendering the page data with the page rendering component to generate the rendering result, and sending the rendering result to the target vehicle-mounted display with the page rendering component according to the identification information of the target vehicle-mounted display, rendering and transmission are both driven by the identification information. Because the identification information is unique, the rendering result is reliably delivered to the right display, avoiding pages being shown in the wrong place because a rendering result was sent to the wrong display.
Optionally, the page rendering component includes a Presentation component.
The Presentation component is an Android component that supports multi-screen rendering: in addition to rendering on the main screen, it can render on other screens connected to the Android system.
In one embodiment, the controller runs the Android system, the central control display is configured as the main screen, and the other candidate vehicle-mounted displays are configured as secondary screens. The Presentation component can then send the rendering result to the central control display for page display, or to any of the other candidate vehicle-mounted displays.
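A minimal sketch using the Android DisplayManager and Presentation APIs is shown below. It assumes a head-unit application that is allowed to draw on secondary displays, and the View passed in stands for the already-rendered page content (for example a WebView); it is an illustration of the approach, not the disclosed implementation.

```kotlin
// Sketch: show an already-rendered page View on the display whose id matches the
// target vehicle-mounted display. Assumes the app may create windows on that display.
import android.app.Presentation
import android.content.Context
import android.hardware.display.DisplayManager
import android.os.Bundle
import android.view.Display
import android.view.View

class PagePresentation(
    context: Context,
    display: Display,
    private val pageView: View            // rendering result for the page to be displayed
) : Presentation(context, display) {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(pageView)          // attach the rendered page to this display's window
    }
}

fun showOnDisplay(context: Context, targetDisplayId: Int, pageView: View) {
    val dm = context.getSystemService(Context.DISPLAY_SERVICE) as DisplayManager
    // Find the candidate in-vehicle display whose identifier matches the target display.
    val target = dm.displays.firstOrNull { it.displayId == targetDisplayId } ?: return
    PagePresentation(context, target, pageView).show()
}
```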
Because the page rendering component includes the Presentation component, which supports multi-screen rendering, the page to be displayed can be dynamically displayed on different vehicle-mounted displays without difficulty.
Fig. 3 is a flowchart of another page display method disclosed according to an embodiment of the present disclosure, which further optimizes and expands the above technical solution and can be combined with the above optional embodiments.
As shown in fig. 3, the method for displaying a page disclosed in this embodiment may include:
s301, determining a touch pressure value corresponding to the acquired at least one touch operation.
In one embodiment, a pressure sensor inside each candidate vehicle-mounted display detects a touch pressure value corresponding to the touch operation in real time and sends the touch pressure value to the controller. The controller acquires a touch pressure value corresponding to at least one touch operation.
S302, taking the candidate vehicle-mounted display on which the touch operation acts as the target vehicle-mounted display when the touch pressure value is greater than the pressure value threshold.
In one embodiment, the controller compares a touch pressure value corresponding to at least one touch operation with a preset pressure value threshold, and takes the touch operation with the touch pressure value greater than the pressure value threshold as a target touch operation. And then the candidate vehicle-mounted display acted by the target touch operation is used as the target vehicle-mounted display.
For example, assume there are touch operation A, touch operation B, and touch operation C. The touch pressure value of touch operation A is 1 N, that of touch operation B is 1.5 N, that of touch operation C is 2 N, and the pressure value threshold is 1.3 N. Touch operation A acts on the central control display, touch operation B on the copilot display, and touch operation C on the rear-row display.
Touch operation B and touch operation C are then taken as target touch operations, the copilot display is taken as the target vehicle-mounted display corresponding to touch operation B, and the rear-row display as the target vehicle-mounted display corresponding to touch operation C.
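A minimal sketch of this filtering step follows. The TouchOperation structure is an assumption, and the 1.3 threshold simply mirrors the example above; on Android, MotionEvent.getPressure() reports a normalized value rather than newtons, so the threshold would be calibrated to whatever scale the pressure sensor actually reports.

```kotlin
// Sketch: keep only touch operations whose pressure exceeds the threshold and
// treat the displays they act on as target displays.
data class TouchOperation(val displayId: Int, val pressure: Float)

const val PRESSURE_THRESHOLD = 1.3f   // illustrative; unit depends on the sensor

fun selectTargetDisplays(operations: List<TouchOperation>): List<Int> =
    operations
        .filter { it.pressure > PRESSURE_THRESHOLD }   // drop light/accidental touches
        .map { it.displayId }                          // the displays acted on become targets
        .distinct()
```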
By determining the touch pressure value of the touch operation and, when it is greater than the pressure value threshold, taking the candidate vehicle-mounted display on which the touch operation acts as the target vehicle-mounted display, the target display is determined in response to the touch operation. Comparing the touch pressure value against the threshold also prevents accidental touches from triggering a meaningless determination of the target display, improving efficiency and saving computing resources.
S303, determining the touch coordinate information corresponding to the touch operation in the target vehicle-mounted display.
In one embodiment, the target in-vehicle display uses position coordinates of a touch position to which a touch operation is applied as touch coordinate information. And sending the touch coordinate information to the controller. For example, if the user performs a touch operation at (X, Y) of the target in-vehicle display, that is, (X, Y) is a touch position, then (X, Y) is taken as touch coordinate information.
The touch coordinate information may be a position coordinate set of the touch position, or a position coordinate of a center or a centroid of the touch position.
S304, determining a target page component from the candidate page components according to the touch coordinate information and the association between coordinate information in the target vehicle-mounted display and the candidate page components.
Wherein the candidate page components represent page components included in a current presentation page of the target in-vehicle display. Candidate page components include, but are not limited to, a page jump component, a date component, a menu component, a form component, or a drop down box component, among others.
In one embodiment, the controller establishes an association between coordinate information in the target vehicle-mounted display and the candidate page components according to the coordinates of each candidate page component in that display. The touch coordinate information is then matched against this association, and the target page component is determined from the candidate page components according to the matching result.
For example, if the touch coordinate information is (X, Y) and the coordinate information (X, Y) is associated with candidate page component A, candidate page component A is taken as the target page component.
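A minimal sketch of such a lookup is shown below, assuming the association is kept as a list of component identifiers with their on-screen bounds for the page currently shown on the target display; the names are illustrative.

```kotlin
// Sketch: hit-test the touch coordinates against each candidate component's bounds.
import android.graphics.Rect

data class PageComponent(val componentId: String, val bounds: Rect)

fun findTargetComponent(
    touchX: Int,
    touchY: Int,
    candidates: List<PageComponent>   // candidate page components of the current page
): PageComponent? =
    candidates.firstOrNull { it.bounds.contains(touchX, touchY) }
```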
S305, generating a data acquisition instruction according to the component identifier of the target page component, sending the data acquisition instruction carrying the component identifier to a server, and acquiring page data of a page to be displayed from the server; and the page to be displayed is a jump page corresponding to the target page component.
The component identification has uniqueness, namely, the unique corresponding page component can be determined according to the component identification.
In one embodiment, the controller obtains a component identifier corresponding to a target page component, and generates a data obtaining instruction carrying the component identifier according to the component identifier. The controller sends the data acquisition instruction to the server, the server analyzes the data acquisition instruction, determines a jump page corresponding to the target page component according to the component identifier obtained by analysis, and sends page data of the jump page to the controller. And the controller receives the page data of the jump page as the page data of the page to be displayed.
For example, assuming that the target page component is a "user center" button, a data acquisition instruction is generated according to the component identifier of the "user center" and is sent to the server, and then page data of a jump page corresponding to the "user center" is acquired from the server and is used as page data of a page to be displayed.
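A minimal sketch of sending a data acquisition instruction that carries the component identifier is given below; the endpoint path and the "componentId" query parameter are illustrative assumptions, not part of the disclosure.

```kotlin
// Sketch: request the jump page's data for a given component identifier from a server.
import java.net.HttpURLConnection
import java.net.URL

fun fetchPageData(serverBase: String, componentId: String): String {
    val url = URL("$serverBase/pageData?componentId=$componentId")
    val conn = url.openConnection() as HttpURLConnection
    return try {
        conn.requestMethod = "GET"
        conn.inputStream.bufferedReader().readText()   // page data of the page to be displayed
    } finally {
        conn.disconnect()
    }
}
```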
In another embodiment, the controller acquires page data of a jump page corresponding to the target page component from locally pre-stored page data according to the component identifier corresponding to the target page component, and the page data is used as page data of a page to be displayed.
By determining the touch coordinate information of the touch operation in the target vehicle-mounted display, determining the target page component from the candidate page components according to that coordinate information and the association between coordinate information in the target vehicle-mounted display and the candidate page components, generating a data acquisition instruction according to the component identifier of the target page component, sending the instruction carrying the component identifier to a server, and acquiring from the server the page data of the page to be displayed (the jump page corresponding to the target page component), the page data is acquired on the basis of the touch coordinates. This ensures that the acquired page data belongs to the page the user wants to view and improves the accuracy of page data acquisition.
S306, rendering the page data, and sending the rendering result to the target vehicle-mounted display to enable the target vehicle-mounted display to display the rendering result.
Fig. 4 is a flowchart of another page display method disclosed according to an embodiment of the present disclosure, which further optimizes and expands the above technical solution and can be combined with the above optional embodiments.
As shown in fig. 4, the method for displaying a page disclosed in this embodiment may include:
S401, determining a voice volume value corresponding to the acquired at least one voice operation, and determining a target position of a target user in the vehicle according to the voice volume value; the target user is the user who performs the voice operation.
The target position is a seat position inside the vehicle, including but not limited to the driver seat, the copilot seat, the rear seat behind the driver, the rear seat behind the copilot, and the like.
In one implementation, the sound pickup apparatus in the vehicle acquires voice information of the voice operation, determines a voice volume value corresponding to the voice information, and sends the voice information and the voice volume value of each voice operation to the controller. The controller determines the target position of the target user in the vehicle according to the voice volume value and the pre-calibrated voice volume value range of the voice operation of the user at each candidate position.
Specifically, the volume value range of voice operations performed by a user at each candidate position is calibrated in advance according to the distance between that candidate position and the sound pickup device. For example, assuming that the driver seat is closer to the sound pickup device than the rear row behind the driver, the volume value range for a user in the driver seat is calibrated in advance as (A1, A2) and that for a user in the rear row behind the driver as (A3, A4), where A1 > A4.
And the controller matches the voice volume value with each voice volume value range according to the acquired voice volume value, and takes the candidate position corresponding to the matched voice volume value range as the target position of the target user in the vehicle.
Illustratively, assume the volume value range for voice operations from the driver seat is calibrated in advance as [1 dB, 5 dB], from the copilot seat as [5 dB, 10 dB], from the rear row behind the driver as [10 dB, 15 dB], and from the rear row behind the copilot as [15 dB, 20 dB]. If the voice volume value of the voice operation is 12 dB, the user performing the voice operation is determined to be in the rear row behind the driver.
S402, determining the target vehicle-mounted display from the candidate vehicle-mounted displays according to the target position and the association between candidate positions and candidate vehicle-mounted displays.
The association between candidate positions and candidate vehicle-mounted displays is established in advance according to users' usage habits. For example, if a user in the copilot seat normally controls only the copilot display, an association between the copilot seat and the copilot display is established; likewise, if a user in a rear seat normally controls only the rear-row display, an association between that rear seat and the rear-row display is established.
In one embodiment, the target location is matched with an association between the candidate location and the candidate in-vehicle display, and the target in-vehicle display is determined from the candidate in-vehicle displays according to the matching result.
For example, if the target position is the driver seat and the driver seat is associated with the central control display, the central control display is taken as the target vehicle-mounted display.
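A minimal sketch of S401 and S402 together is given below. The dB ranges repeat the illustrative calibration above, and the seat-to-display identifiers are assumptions that would be configured per vehicle model.

```kotlin
// Sketch of S401/S402: map a measured voice volume to a seat position via calibrated
// ranges, then map the seat position to its associated display identifier.
enum class SeatPosition { DRIVER, COPILOT, REAR_DRIVER_SIDE, REAR_COPILOT_SIDE }

// Pre-calibrated volume ranges for voice operations from each candidate position (S401).
val volumeRanges = mapOf(
    SeatPosition.DRIVER to 1.0..5.0,
    SeatPosition.COPILOT to 5.0..10.0,
    SeatPosition.REAR_DRIVER_SIDE to 10.0..15.0,
    SeatPosition.REAR_COPILOT_SIDE to 15.0..20.0
)

// Pre-established association between candidate positions and candidate displays (S402).
val displayForPosition = mapOf(
    SeatPosition.DRIVER to "Display1",            // central control display
    SeatPosition.COPILOT to "Display2",           // copilot display
    SeatPosition.REAR_DRIVER_SIDE to "Display3",  // rear-row display behind the driver
    SeatPosition.REAR_COPILOT_SIDE to "Display4"  // rear-row display behind the copilot
)

fun targetDisplayForVolume(volumeDb: Double): String? {
    val position = volumeRanges.entries.firstOrNull { volumeDb in it.value }?.key
    return position?.let { displayForPosition[it] }
}
```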
By determining the voice volume value of the voice operation, determining the target position of the target user in the vehicle from that volume value, and determining the target vehicle-mounted display from the candidate vehicle-mounted displays according to the target position and the association between candidate positions and candidate vehicle-mounted displays, the target display corresponding to each voice operation is determined based on the voice volume value. Even when the interactive operation is a contactless voice operation, the vehicle-mounted display the user wants to view can still be identified, ensuring that subsequent page data is rendered in the right place.
S403, performing voice recognition on the voice information of the voice operation, and determining a target page component from the candidate page components in the target vehicle-mounted display according to the voice recognition result.
The voice information represents a voice instruction carried by the voice operation.
In one embodiment, the controller performs voice recognition on voice information of the voice operation, performs intention recognition according to a voice recognition result, and determines a target page component which the user wants to select.
For example, assuming that the voice information of the voice operation is "please open XX applet", the voice information of the voice operation is subjected to voice recognition, and intention recognition is performed according to a voice recognition result, so that the open button of the XX applet is used as a target page component, and a default initial page of the XX applet is used as a page to be displayed.
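A minimal sketch of matching the recognition result to a component is shown below. It assumes speech recognition has already produced a text transcript, and the naive keyword match over component labels stands in for the intent recognition described above; it is purely illustrative.

```kotlin
// Sketch: pick the candidate page component whose label appears in the transcript.
data class CandidateComponent(val componentId: String, val label: String)

fun matchTargetComponent(
    transcript: String,                      // e.g. "please open XX applet"
    candidates: List<CandidateComponent>     // components of the current page on the target display
): CandidateComponent? =
    candidates.firstOrNull { transcript.contains(it.label, ignoreCase = true) }
```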
S404, generating a data acquisition instruction according to the component identifier of the target page component, sending the data acquisition instruction carrying the component identifier to a server, and acquiring page data of a page to be displayed from the server; and the page to be displayed is a jump page corresponding to the target page component.
By performing voice recognition on the voice information of the voice operation, determining the target page component from the candidate page components in the target vehicle-mounted display according to the recognition result, and generating the data acquisition instruction according to the component identifier of the target page component, the page data of the page to be displayed is acquired on the basis of the voice recognition result. This ensures that the acquired page data is for the page the user wants to view, improves the accuracy of page data acquisition, meets the need for remote voice control, and improves the user experience.
S405, rendering the page data, and sending a rendering result to the target vehicle-mounted display to enable the target vehicle-mounted display to display the rendering result.
On the basis of the above embodiment, the page to be presented includes an applet page.
An applet is an application that can be used without downloading and installing it; a user can open an applet through a host application by executing a scan instruction or a search instruction in that host application. In this embodiment, the applet page is a vehicle-mounted applet page, and the host application is an application installed in the in-vehicle infotainment (head unit) system.
By making the page to be displayed include an applet page, the need of users to view the applet page on multiple screens in an in-vehicle scenario is met, and the in-vehicle applet experience is improved.
Fig. 5 is a schematic structural diagram of a page display apparatus disclosed in an embodiment of the present disclosure, which may be applied to displaying a page to be displayed on multiple screens through at least one vehicle-mounted display. The apparatus of this embodiment can be implemented in software and/or hardware and integrated into any electronic device with computing capability.
As shown in fig. 5, the presentation apparatus 50 of the page disclosed in this embodiment may include a display determination module 51, a data acquisition module 52, and a data rendering module 53, wherein:
a display determination module 51 for determining at least one target in-vehicle display from the candidate in-vehicle displays in response to at least one interactive operation;
the data acquisition module 52 is configured to generate a data acquisition instruction according to the interactive operation, and acquire page data of a page to be displayed according to the data acquisition instruction;
and the data rendering module 53 is configured to render the page data, and send a rendering result to the target vehicle-mounted display, so that the target vehicle-mounted display displays the rendering result.
Optionally, the data rendering module 53 is specifically configured to:
rendering the page data by using the page rendering component to generate a rendering result;
and sending the rendering result to the target vehicle-mounted display by utilizing the page rendering component according to the identification information of the target vehicle-mounted display.
Optionally, the page rendering component includes a Presentation component.
Optionally, the interactive operation includes a touch operation;
the display determination module 51 is specifically configured to:
determining a touch pressure value corresponding to the touch operation;
and taking the candidate vehicle-mounted display on which the touch operation acts as the target vehicle-mounted display when the touch pressure value is greater than the pressure value threshold.
Optionally, the interactive operation comprises a voice operation;
the display determination module 51 is specifically configured to:
determining a voice volume value corresponding to the voice operation, and determining a target position of a target user in the vehicle according to the voice volume value; the target user is the user who performs the voice operation;
and determining the target vehicle-mounted display from the candidate vehicle-mounted displays according to the target position and the association between the candidate position and the candidate vehicle-mounted displays.
Optionally, the data obtaining module 52 is specifically configured to:
determining corresponding touch coordinate information of touch operation in a target vehicle-mounted display;
determining a target page component from the candidate page components according to the touch coordinate information and the association between the coordinate information in the target vehicle-mounted display and the candidate page components;
and generating a data acquisition instruction according to the component identifier of the target page component.
Optionally, the data obtaining module 52 is further specifically configured to:
performing voice recognition on voice information of the voice operation, and determining a target page component from candidate page components in a target vehicle-mounted display according to a voice recognition result;
and generating a data acquisition instruction according to the component identifier of the target page component.
Optionally, the data obtaining module 52 is further specifically configured to:
sending a data acquisition instruction carrying the component identifier to a server;
acquiring page data of a page to be displayed from a server; and the page to be displayed is a jump page corresponding to the target page component.
Optionally, the page to be displayed includes an applet page.
The page display device 50 disclosed in the embodiment of the present disclosure can perform the page display method disclosed in the embodiment of the present disclosure, and has functional modules and beneficial effects corresponding to the execution method. Reference may be made to the description in the method embodiments of the present disclosure for details that are not explicitly described in this embodiment.
In the technical solution of the present disclosure, the acquisition, storage, and application of users' personal information all comply with the relevant laws and regulations and do not violate public order and good customs.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
FIG. 6 illustrates a schematic block diagram of an example electronic device 600 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 6, the device 600 includes a computing unit 601, which can perform various appropriate actions and processes according to a computer program stored in a Read-Only Memory (ROM) 602 or a computer program loaded from a storage unit 608 into a Random Access Memory (RAM) 603. The RAM 603 can also store various programs and data required for the operation of the device 600. The computing unit 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
A number of components in the device 600 are connected to the I/O interface 605, including: an input unit 606 such as a keyboard, a mouse, or the like; an output unit 607 such as various types of displays, speakers, and the like; a storage unit 608, such as a magnetic disk, optical disk, or the like; and a communication unit 609 such as a network card, modem, wireless communication transceiver, etc. The communication unit 609 allows the device 600 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The computing unit 601 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 601 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The calculation unit 601 performs the respective methods and processes described above, such as the presentation method of a page. For example, in some embodiments, the method of presenting pages may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as storage unit 608. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 600 via the ROM 602 and/or the communication unit 609. When the computer program is loaded into the RAM 603 and executed by the computing unit 601, one or more steps of the method of presenting pages described above may be performed. Alternatively, in other embodiments, the computing unit 601 may be configured to perform the method of presentation of the page by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), system on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), blockchain networks, and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or cloud host, which is a host product in the cloud computing service system and overcomes the drawbacks of high management difficulty and weak service scalability found in traditional physical hosts and VPS services.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved, and the present disclosure is not limited herein.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.

Claims (21)

1. A page display method comprises the following steps:
determining at least one target vehicle-mounted display from the candidate vehicle-mounted displays in response to at least one interactive operation;
generating a data acquisition instruction according to the interactive operation, and acquiring page data of a page to be displayed according to the data acquisition instruction;
rendering the page data, and sending a rendering result to the target vehicle-mounted display to enable the target vehicle-mounted display to display the rendering result.
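By way of non-limiting illustration only, the overall flow recited in claim 1 could be sketched in Kotlin roughly as follows; all type and function names (InteractiveOperation, VehicleDisplay, PageRenderer, showPage, and so on) are hypothetical and do not limit the claim:

    // Hypothetical types modelling the entities named in the claim.
    data class InteractiveOperation(val kind: String, val payload: Map<String, Any?>)
    data class VehicleDisplay(val id: String)
    data class DataAcquisitionInstruction(val componentId: String)
    data class PageData(val content: String)
    data class RenderResult(val frame: ByteArray)

    interface PageRenderer {
        fun render(data: PageData): RenderResult
        fun sendTo(display: VehicleDisplay, result: RenderResult)
    }

    // End-to-end flow: pick the target display(s), build the data acquisition
    // instruction from the operation, fetch the page data, render it, and dispatch
    // the rendering result to every target display.
    fun showPage(
        op: InteractiveOperation,
        candidates: List<VehicleDisplay>,
        isTarget: (InteractiveOperation, VehicleDisplay) -> Boolean,
        buildInstruction: (InteractiveOperation) -> DataAcquisitionInstruction,
        fetchPageData: (DataAcquisitionInstruction) -> PageData,
        renderer: PageRenderer
    ) {
        val targets = candidates.filter { isTarget(op, it) }   // determine target display(s)
        val instruction = buildInstruction(op)                  // generate data acquisition instruction
        val pageData = fetchPageData(instruction)               // acquire page data
        val result = renderer.render(pageData)                  // render
        targets.forEach { renderer.sendTo(it, result) }         // send rendering result for display
    }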
2. The method of claim 1, wherein the rendering the page data and sending the rendering result to the target vehicle-mounted display comprises:
rendering the page data by using a page rendering component to generate a rendering result;
and sending the rendering result to the target vehicle-mounted display by utilizing the page rendering component according to the identification information of the target vehicle-mounted display.
3. The method of claim 2, wherein the page rendering component comprises a Presentation component.
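For orientation only: android.app.Presentation is Android's component for drawing content on a secondary display, and it is one way the page rendering component of claims 2-3 could be realized. A minimal sketch follows, assuming an Android-based head unit; PagePresentation, presentOn, and renderedView are illustrative names, not part of the claims:

    // Sketch only, assuming an Android in-vehicle environment.
    import android.app.Presentation
    import android.content.Context
    import android.hardware.display.DisplayManager
    import android.os.Bundle
    import android.view.Display
    import android.view.View

    // Presentation is a dialog-like component bound to a specific (secondary) display.
    class PagePresentation(context: Context, display: Display, private val renderedView: View) :
        Presentation(context, display) {

        override fun onCreate(savedInstanceState: Bundle?) {
            super.onCreate(savedInstanceState)
            setContentView(renderedView)   // attach the rendering result to this display
        }
    }

    // Resolve the target display by its identification information and show the page on it.
    fun presentOn(context: Context, targetDisplayId: Int, renderedView: View) {
        val displayManager = context.getSystemService(Context.DISPLAY_SERVICE) as DisplayManager
        val target: Display = displayManager.displays.firstOrNull { it.displayId == targetDisplayId } ?: return
        PagePresentation(context, target, renderedView).show()
    }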
4. The method of claim 1, wherein the interactive operation comprises a touch operation;
the determining at least one target vehicle-mounted display from the candidate vehicle-mounted displays in response to at least one interactive operation comprises:
determining a touch pressure value corresponding to the touch operation;
and taking the candidate vehicle-mounted display on which the touch operation acts as the target vehicle-mounted display in a case where the touch pressure value is greater than a pressure value threshold.
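A minimal sketch of the pressure check of claim 4, assuming an Android touch pipeline in which MotionEvent reports a normalized pressure value; the threshold constant and function name are illustrative only:

    import android.view.MotionEvent

    // Illustrative threshold; a real system would calibrate this per touch panel.
    private const val PRESSURE_THRESHOLD = 0.35f

    // The display the touch acted on becomes the target display only when the
    // reported pressure exceeds the threshold; otherwise the operation is ignored.
    fun targetDisplayForTouch(touchedDisplayId: Int, event: MotionEvent): Int? =
        if (event.actionMasked == MotionEvent.ACTION_DOWN && event.pressure > PRESSURE_THRESHOLD) {
            touchedDisplayId
        } else {
            null
        }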
5. The method of claim 1, wherein the interactive operation comprises a voice operation;
the determining at least one target vehicle-mounted display from the candidate vehicle-mounted displays in response to at least one interactive operation comprises:
determining a voice volume value corresponding to the voice operation, and determining a target position of a target user in a vehicle according to the voice volume value; wherein the target user is a user who performs the voice operation;
and determining a target vehicle-mounted display from the candidate vehicle-mounted displays according to the target position and an association between candidate positions and the candidate vehicle-mounted displays.
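One possible reading of claim 5, sketched for illustration only: the seat whose microphone reports the highest volume is taken as the target user's position, and the display associated with that position becomes the target vehicle-mounted display. All names (SeatPosition, targetDisplayForVoice, and so on) are hypothetical:

    // Sketch only; volumeBySeat and displayBySeat stand in for per-seat microphone
    // readings and for the association between candidate positions and displays.
    data class SeatPosition(val name: String)

    fun locateSpeaker(volumeBySeat: Map<SeatPosition, Double>): SeatPosition? =
        volumeBySeat.maxByOrNull { it.value }?.key   // loudest microphone ~ speaker's seat

    fun targetDisplayForVoice(
        volumeBySeat: Map<SeatPosition, Double>,
        displayBySeat: Map<SeatPosition, String>
    ): String? = locateSpeaker(volumeBySeat)?.let { displayBySeat[it] }

    // Example: the loudest reading at the rear-right seat selects the rear-right screen.
    fun main() {
        val volumes = mapOf(SeatPosition("driver") to 0.2, SeatPosition("rear-right") to 0.8)
        val displays = mapOf(SeatPosition("driver") to "center-stack", SeatPosition("rear-right") to "rear-right-screen")
        println(targetDisplayForVoice(volumes, displays))   // prints rear-right-screen
    }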
6. The method of claim 4, wherein the generating a data acquisition instruction according to the interactive operation comprises:
determining touch coordinate information corresponding to the touch operation in the target vehicle-mounted display;
determining a target page component from candidate page components according to the touch coordinate information and an association between coordinate information in the target vehicle-mounted display and the candidate page components;
and generating a data acquisition instruction according to the component identifier of the target page component.
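An illustrative sketch of the hit test implied by claim 6; ComponentRegion stands in for the association between coordinate ranges in the target vehicle-mounted display and candidate page components, and all names are hypothetical:

    // Sketch only; each region maps a rectangle of the target display to a page component.
    data class ComponentRegion(
        val componentId: String,
        val left: Float, val top: Float, val right: Float, val bottom: Float
    ) {
        fun contains(x: Float, y: Float) = x in left..right && y in top..bottom
    }

    data class DataAcquisitionInstruction(val componentId: String)

    // Hit-test the touch coordinates against the registered regions; the instruction
    // carries the component identifier of the component that was hit.
    fun instructionForTouch(
        x: Float, y: Float,
        regions: List<ComponentRegion>
    ): DataAcquisitionInstruction? =
        regions.firstOrNull { it.contains(x, y) }?.let { DataAcquisitionInstruction(it.componentId) }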
7. The method of claim 5, wherein the generating a data acquisition instruction according to the interactive operation comprises:
performing voice recognition on the voice information of the voice operation, and determining a target page component from candidate page components in the target vehicle-mounted display according to a voice recognition result;
and generating a data acquisition instruction according to the component identifier of the target page component.
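An illustrative sketch of the matching step of claim 7, assuming the voice recognition result is compared against keywords registered for the candidate page components on the target display; all names are hypothetical:

    // Sketch only; PageComponent and its keyword set are illustrative.
    data class PageComponent(val componentId: String, val keywords: Set<String>)

    // Return the identifier of the component whose keywords best match the recognized text,
    // or null when no component matches at all.
    fun componentForUtterance(recognizedText: String, candidates: List<PageComponent>): String? {
        val words = recognizedText.lowercase().split(" ", ",", "，").filter { it.isNotBlank() }
        return candidates
            .map { c -> c to words.count { w -> w in c.keywords } }
            .filter { it.second > 0 }
            .maxByOrNull { it.second }
            ?.first
            ?.componentId
    }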
8. The method according to claim 6 or 7, wherein the acquiring page data of the page to be displayed according to the data acquisition instruction comprises:
sending a data acquisition instruction carrying the component identifier to a server;
acquiring the page data of the page to be displayed from the server, wherein the page to be displayed is a jump page corresponding to the target page component.
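An illustrative sketch of claim 8, assuming a hypothetical HTTP endpoint on the server that resolves a component identifier to the jump page's data; the path and query parameter are not part of the disclosure:

    import java.net.URI
    import java.net.http.HttpClient
    import java.net.http.HttpRequest
    import java.net.http.HttpResponse

    // The data acquisition instruction carrying the component identifier is sent to the
    // server; the response body is taken as the page data of the page to be displayed.
    fun fetchPageData(serverBase: String, componentId: String): String {
        val client = HttpClient.newHttpClient()
        val request = HttpRequest.newBuilder(URI.create("$serverBase/page-data?componentId=$componentId"))
            .GET()
            .build()
        return client.send(request, HttpResponse.BodyHandlers.ofString()).body()
    }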
9. The method of any of claims 1-8, wherein the page to be displayed comprises an applet page.
10. A page display apparatus, comprising:
a display determination module for determining at least one target vehicle-mounted display from the candidate vehicle-mounted displays in response to at least one interactive operation;
the data acquisition module is used for generating a data acquisition instruction according to the interactive operation and acquiring page data of a page to be displayed according to the data acquisition instruction;
and the data rendering module is used for rendering the page data and sending a rendering result to the target vehicle-mounted display so that the target vehicle-mounted display can display the rendering result.
11. The apparatus of claim 10, wherein the data rendering module is specifically configured to:
rendering the page data by using a page rendering component to generate a rendering result;
and sending the rendering result to the target vehicle-mounted display by utilizing the page rendering component according to the identification information of the target vehicle-mounted display.
12. The apparatus of claim 11, wherein the page rendering component comprises a Presentation component.
13. The apparatus of claim 10, wherein the interactive operation comprises a touch operation;
the display determination module is specifically configured to:
determining a touch pressure value corresponding to the touch operation;
and taking the candidate vehicle-mounted display on which the touch operation acts as the target vehicle-mounted display in a case where the touch pressure value is greater than a pressure value threshold.
14. The apparatus of claim 10, wherein the interactive operation comprises a voice operation;
the display determination module is specifically configured to:
determining a voice volume value corresponding to the voice operation, and determining a target position of a target user in a vehicle according to the voice volume value; wherein the target user is a user who performs the voice operation;
and determining a target vehicle-mounted display from the candidate vehicle-mounted displays according to the target position and an association between candidate positions and the candidate vehicle-mounted displays.
15. The apparatus of claim 13, wherein the data acquisition module is specifically configured to:
determining touch coordinate information corresponding to the touch operation in the target vehicle-mounted display;
determining a target page component from candidate page components according to the touch coordinate information and an association between coordinate information in the target vehicle-mounted display and the candidate page components;
and generating a data acquisition instruction according to the component identifier of the target page component.
16. The apparatus according to claim 14, wherein the data acquisition module is specifically configured to:
performing voice recognition on the voice information of the voice operation, and determining a target page component from candidate page components in the target vehicle-mounted display according to a voice recognition result;
and generating a data acquisition instruction according to the component identifier of the target page component.
17. The apparatus according to claim 15 or 16, wherein the data acquisition module is further configured to:
sending a data acquisition instruction carrying the component identifier to a server;
acquiring the page data of the page to be displayed from the server, wherein the page to be displayed is a jump page corresponding to the target page component.
18. The apparatus of any of claims 10-17, wherein the page to be displayed comprises an applet page.
19. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-9.
20. A non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform the method according to any one of claims 1-9.
21. A computer program product comprising a computer program which, when executed by a processor, implements a method according to any one of claims 1-9.
CN202210639477.9A 2022-06-07 2022-06-07 Page display method and device, electronic equipment and medium Withdrawn CN115061762A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210639477.9A CN115061762A (en) 2022-06-07 2022-06-07 Page display method and device, electronic equipment and medium

Publications (1)

Publication Number Publication Date
CN115061762A true CN115061762A (en) 2022-09-16

Family

ID=83200533

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210639477.9A Withdrawn CN115061762A (en) 2022-06-07 2022-06-07 Page display method and device, electronic equipment and medium

Country Status (1)

Country Link
CN (1) CN115061762A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115657991A (en) * 2022-12-09 2023-01-31 深圳曦华科技有限公司 Display screen control method of intelligent automobile, domain controller and related device

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106101319A (en) * 2016-07-27 2016-11-09 深圳市金立通信设备有限公司 Terminal and display control method thereof
CN109493871A (en) * 2017-09-11 2019-03-19 上海博泰悦臻网络技术服务有限公司 The multi-screen voice interactive method and device of onboard system, storage medium and vehicle device
CN109889877A (en) * 2019-03-29 2019-06-14 上海势航网络科技有限公司 Car multi-screen display control method and device
CN110171372A (en) * 2019-05-27 2019-08-27 广州小鹏汽车科技有限公司 Interface display method, device and the vehicle of car-mounted terminal
CN110908625A (en) * 2018-09-18 2020-03-24 阿里巴巴集团控股有限公司 Multi-screen display method, device, equipment, system, cabin and storage medium
CN110992946A (en) * 2019-11-01 2020-04-10 上海博泰悦臻电子设备制造有限公司 Voice control method, terminal and computer readable storage medium
CN111475075A (en) * 2020-04-01 2020-07-31 上海擎感智能科技有限公司 Vehicle-mounted screen control method, management system and computer-readable storage medium
CN111683276A (en) * 2020-06-16 2020-09-18 扬州航盛科技有限公司 Vehicle-mounted real-time multi-screen projection method based on android system
CN112309395A (en) * 2020-09-17 2021-02-02 广汽蔚来新能源汽车科技有限公司 Man-machine conversation method, device, robot, computer device and storage medium
CN113961156A (en) * 2020-07-20 2022-01-21 北京字节跳动网络技术有限公司 Multi-screen display method, device and system, electronic equipment and computer medium
CN114554274A (en) * 2020-11-25 2022-05-27 陕西重型汽车有限公司 WiFi-based vehicle-mounted terminal screen projection system and method

Similar Documents

Publication Publication Date Title
EP2849169B1 (en) Messaging and data entry validation system and method for aircraft
EP2689969B1 (en) Image processing in image displaying device mounted on vehicle
CN110741431A (en) Cross-device handover
CN111694433A (en) Voice interaction method and device, electronic equipment and storage medium
US20150277848A1 (en) System and method for providing, gesture control of audio information
CN107451439B (en) Multi-function buttons for computing devices
CN113366820A (en) Controlling a remote device using a user interface template
US20110193810A1 (en) Touch type display apparatus, screen division method, and storage medium thereof
CN111688580B (en) Method and device for picking up sound by intelligent rearview mirror
CN111121814A (en) Navigation method, navigation device, electronic equipment and computer readable storage medium
KR20210108341A (en) Display verification method for web browser, device, computer equipment and storage medium
CN115061762A (en) Page display method and device, electronic equipment and medium
CN114360554A (en) Vehicle remote control method, device, equipment and storage medium
WO2019005245A1 (en) Accessing application features from within a graphical keyboard
EP4134812A2 (en) Method and apparatus of displaying information, electronic device and storage medium
CN111324202A (en) Interaction method, device, equipment and storage medium
CN111741444A (en) Display method, device, equipment and storage medium
CN114879923A (en) Multi-screen control method and device, electronic equipment and storage medium
CN114964295A (en) Navigation method, device and system and electronic equipment
CN111124185B (en) Control method and device of equipment, server and storage medium
CN114356083A (en) Virtual personal assistant control method and device, electronic equipment and readable storage medium
CN113448668A (en) Method and device for skipping popup window and electronic equipment
EP4027336B1 (en) Context-dependent spoken command processing
CN116521113A (en) Multi-screen control method and device and vehicle
CN116016578B (en) Intelligent voice guiding method based on equipment state and user behavior

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20220916