CN110716776A - Method for displaying user interface and vehicle-mounted terminal

Info

Publication number: CN110716776A
Application number: CN201910808405.0A
Authority: CN (China)
Legal status: Pending
Prior art keywords: vehicle, function, user, preset, user interface
Other languages: Chinese (zh)
Inventor: 吴思举
Assignee: Huawei Device Co Ltd (application filed by Huawei Device Co Ltd)
Related application: PCT/CN2020/112285 (published as WO2021037251A1)

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 - Arrangements for executing specific programs
    • G06F 9/451 - Execution arrangements for user interfaces
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60R - VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R 16/00 - Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; arrangement of elements of such circuits
    • B60R 16/02 - Electric or fluid circuits specially adapted for vehicles: electric constitutive elements
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 - Interaction techniques based on GUIs based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/0487 - Interaction techniques based on GUIs using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 - Interaction techniques based on GUIs using a touch-screen or digitiser, e.g. input of commands through traced gestures


Abstract

The application provides a method for displaying a user interface and a vehicle-mounted terminal, relating to the field of terminal technologies. The vehicle-mounted terminal can switch between different user interfaces according to the scene the vehicle is in and the vehicle's current speed, so as to safeguard the user's driving safety. The method comprises the following steps: if the vehicle is in a driving state, the vehicle-mounted terminal judges whether the vehicle meets a first preset condition, where the first preset condition is that the vehicle is in a preset scene or that the current speed is greater than a first preset speed. If the vehicle meets the first preset condition, the vehicle-mounted terminal displays a first user interface that either does not display a first function, or displays the first function in a first preset manner indicating that the first function cannot be operated by the user.

Description

Method for displaying user interface and vehicle-mounted terminal
Technical Field
The present application relates to the field of terminal technologies, and in particular, to a method for displaying a user interface and a vehicle-mounted terminal.
Background
With the development of technology, touch-based in-vehicle systems have in recent years become standard equipment for vehicles from many manufacturers. Compared with traditional in-vehicle systems based on physical controls, a touch-based in-vehicle system can provide richer functions, such as calls, navigation, video, and music. Unlike a mobile interaction scenario, driving is the primary task in an in-vehicle scenario; other tasks are secondary, and the in-vehicle system should be used only on the premise of driving safety. However, because the touch screen of an in-vehicle system cannot provide tactile feedback, it occupies more of the user's visual resources during driving than physical controls do, which affects driving safety.
Disclosure of Invention
The application provides a method for displaying a user interface and a vehicle-mounted terminal, enabling the vehicle-mounted terminal to switch between different user interfaces according to the scene the vehicle is in and the vehicle's current speed, so as to safeguard the user's driving safety.
To achieve this, the following technical solutions are adopted:
In a first aspect, the present application provides a method of displaying a user interface, which may include: if the vehicle is in a driving state, the vehicle-mounted terminal judges whether the vehicle meets a first preset condition, where the first preset condition is that the vehicle is in a preset scene or that the current speed is greater than a first preset speed; and if the vehicle meets the first preset condition, the vehicle-mounted terminal displays a first user interface that either does not display a first function, or displays the first function in a first preset manner indicating that the first function cannot be operated by the user.
The first function is a function that occupies more of the user's visual or cognitive resources. Illustratively, the first function includes any one or more of: an entertainment function, an information function, a dialing function within the call function, a contact-search function within the call function, an address-search function within the navigation function, a song-search function within the music function, and a weather-query function within the weather function.
Thus, by setting the preset scene and the first preset speed, the vehicle-mounted terminal can judge, while the vehicle is moving, whether the vehicle is in the preset scene or whether the current speed exceeds the first preset speed, and thereby determine whether the current vehicle state requires the user's full attention on driving. When it does, the touch screen of the vehicle-mounted terminal hides, or displays but disables, the functions that occupy more of the user's visual or cognitive resources, so that the user stays focused on driving and the user's driving safety is ensured.
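Illustratively, this decision flow can be summarized in a short sketch. The following Python snippet is a minimal illustration only: the threshold value, the function lists, and every name in it are assumptions made for the example, not part of the claimed method.

```python
# Illustrative sketch of the first-aspect decision flow; all names and
# values are assumptions, not part of the patent's claims.
FIRST_PRESET_SPEED = 80  # km/h, an assumed threshold

FIRST_FUNCTIONS = ["entertainment", "messaging", "dial_number",
                   "search_contact", "search_address", "search_song",
                   "query_weather"]
SECOND_FUNCTIONS = ["recent_contacts", "recommended_address",
                    "recent_playlist", "local_realtime_weather"]

def select_user_interface(is_driving, in_preset_scene, current_speed):
    if is_driving and (in_preset_scene or current_speed > FIRST_PRESET_SPEED):
        # First user interface: first functions hidden, or rendered in the
        # first preset manner (visible but not operable); second functions
        # rendered in the second preset manner (operable).
        return {"interface": "first",
                "hidden_or_disabled": FIRST_FUNCTIONS,
                "operable": SECOND_FUNCTIONS}
    # Second user interface: everything operable.
    return {"interface": "second",
            "operable": FIRST_FUNCTIONS + SECOND_FUNCTIONS}

# Example: driving at 95 km/h on an ordinary road still restricts the UI.
assert select_user_interface(True, False, 95)["interface"] == "first"
```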
In a possible implementation, the first user interface displays a second function in a second preset manner, where the second preset manner indicates that the user is allowed to operate the second function.
The second function is a function that occupies fewer of the user's visual or cognitive resources. Illustratively, the second function includes any one or more of: a recent-contacts function within the call function, a recommended-address function within the navigation function, a recent-playlist function within the music function, and a local real-time weather function within the weather function.
In one possible implementation, after the vehicle-mounted terminal displays the first user interface, the method further includes: if the vehicle-mounted terminal detects a first operation by the user on the first user interface, the vehicle-mounted terminal displays a third user interface, which displays only the first function, in the first preset manner. The first operation is any one of the following operations on the touch screen of the vehicle-mounted terminal: a click operation, a slide operation, a preset press operation, or a preset gesture operation.
Thus, when the current interface hides some functions, or displays them but does not allow them to be operated, a simple operation lets the vehicle-mounted terminal gather all of the functions that demand high visual and cognitive resources onto one interface, so that the user can quickly locate a hidden or disabled function that needs to be operated, reducing the attention the user would otherwise spend searching for it.
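As a sketch, with hypothetical event and function names, the first operation might be handled as below: any of the listed gestures on the first user interface brings up a third user interface that collects only the first functions, still non-operable.

```python
# Hedged sketch of the "first operation" handling; event names and the
# function list are illustrative assumptions.
FIRST_OPERATIONS = {"click", "slide", "preset_press", "preset_gesture"}
FIRST_FUNCTIONS = ["entertainment", "messaging", "dial_number",
                   "search_contact", "search_address", "search_song"]

def on_touch_event(event, current_interface):
    if current_interface == "first" and event in FIRST_OPERATIONS:
        # Third user interface: only the first functions, all shown in the
        # first preset manner (visible but not operable).
        return {"interface": "third",
                "shown_disabled": FIRST_FUNCTIONS, "operable": []}
    return {"interface": current_interface}
```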
In one possible implementation, the preset scene includes any one or several of the following: the vehicle is travelling on a section with dense pedestrian flow, on a high-risk section, at night, in severe weather, or on a speed-limited section.
In one possible implementation, judging whether the vehicle is in a preset scene includes: the vehicle-mounted terminal judges whether the vehicle is travelling on a section with dense pedestrian flow according to at least one of the following: the vehicle's running state, images of the vehicle's surroundings, and the driving route; and/or judges whether the vehicle is travelling on a high-risk section according to at least one of: the driving route, the vehicle's running state, and images of the vehicle's surroundings; and/or judges whether the vehicle is travelling at night according to at least one of: the vehicle's ambient light conditions, images of the vehicle's surroundings, and the real time; and/or judges whether the vehicle is travelling in severe weather according to at least one of: the vehicle's ambient light conditions, images of the surroundings, and data from weather software; and/or judges whether the vehicle is travelling on a speed-limited section according to at least one of: images of the vehicle's surroundings and current road information.
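A minimal sketch of this multi-source scene check follows. Every key in the signals dictionary and every threshold is an assumption standing in for the real data sources named above (running state, surround images, route, ambient light, clock, weather software, road information).

```python
# Hedged sketch of the preset-scene check; keys and thresholds are assumed.
def is_night(ambient_lux, hour):
    return (ambient_lux is not None and ambient_lux < 10) or hour >= 21 or hour < 5

def is_bad_weather(rain_mm_per_h, ambient_lux):
    # Heavy rain, or daytime darkness suggesting dense fog.
    return (rain_mm_per_h or 0) > 8 or (ambient_lux or 1000) < 50

def in_preset_scene(signals):
    """signals: dict of whatever data is available from the car machine's
    sensors/cameras, the route, or a connected phone; missing keys don't vote."""
    return any([
        signals.get("pedestrian_density", 0.0) > 0.5,  # from image analysis
        signals.get("road_class") in {"tunnel", "continuous_turns", "expressway"},
        is_night(signals.get("ambient_lux"), signals.get("hour", 12)),
        is_bad_weather(signals.get("rain_mm_per_h"), signals.get("ambient_lux")),
        signals.get("speed_limit_kmh") is not None,    # speed-limited section
    ])
```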
In a possible implementation, if the vehicle-mounted terminal detects a second operation by the user, the vehicle-mounted terminal starts a voice control mode. The second operation is operating a first preset button or the icon of the first function, and is used to start the first function; the voice control mode enables voice interaction between the user and the vehicle-mounted terminal.
Thus, the user can still use functions that occupy more visual and cognitive resources through the voice control mode. Because voice operation does not occupy excessive attention, the user's needs can be met on the premise of driving safety, improving the user experience.
In a possible implementation, starting the voice control mode specifically includes: the vehicle-mounted terminal starts the voice interaction function; through it, the terminal prompts the user to operate the first function by a third operation and informs the user how to perform the third operation, where the third operation is an operation in the form of voice interaction.
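Sketched below with placeholder speech functions (the patent does not name an API), the second operation triggers a spoken prompt that teaches the third operation, then hands control to voice interaction.

```python
# Hedged sketch of starting the voice control mode; tts_say and listen are
# placeholders for the terminal's actual text-to-speech and speech-recognition
# interfaces, which the patent does not specify.
def on_second_operation(function_name, tts_say, listen):
    tts_say(f"{function_name} is restricted while driving. "
            f"You can say, for example: 'open {function_name}'.")
    return listen()  # the third operation: the user's voice command

# Example with trivial stand-ins:
reply = on_second_operation("navigation address search",
                            tts_say=print,
                            listen=lambda: "open navigation address search")
```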
In one possible implementation, the first preset condition further includes: the vehicle is in a preset scene and the current speed is greater than a second preset speed, where the second preset speed is the minimum speed limit corresponding to the preset scene.
Thus, by setting a speed limit for the preset scene, the first function need not be hidden or disabled when the vehicle is in the preset scene but moving slowly, further improving the user experience while still ensuring the user's driving safety.
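The refined condition can be sketched as follows; the per-scene minimum speed limits are invented values for illustration only.

```python
# Hedged sketch of the refined first preset condition; all values assumed.
SECOND_PRESET_SPEED = {"school_section": 10, "tunnel": 30}  # km/h per scene

def first_preset_condition(scene, current_speed_kmh, first_preset_speed=80):
    if current_speed_kmh > first_preset_speed:
        return True
    # Being in a preset scene only restricts the UI once the vehicle also
    # exceeds that scene's minimum speed limit (the second preset speed).
    return scene is not None and current_speed_kmh > SECOND_PRESET_SPEED.get(scene, 0)

# Crawling through a school section at 5 km/h leaves the full UI available.
assert first_preset_condition("school_section", 5) is False
```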
In a possible implementation, if it is determined that the vehicle does not meet the first preset condition, the vehicle-mounted terminal displays a second user interface, which displays both the first function and the second function in the second preset manner, indicating that the user is allowed to operate them.
In a second aspect, an embodiment of the present application provides a vehicle-mounted terminal, which may be an apparatus implementing the method of the first aspect. The device may include: one or more processors; a memory storing instructions; and a touch screen for detecting touch operations and displaying interfaces. When executed by the one or more processors, the instructions cause the device to perform the method of displaying a user interface of the first aspect.
In a third aspect, the present application provides a vehicle-mounted terminal having a function of implementing the method for displaying a user interface of any implementation of the first aspect. The function can be implemented in hardware, or in hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the function.
In a fourth aspect, an embodiment of the present application provides a computer storage medium, which includes computer instructions, and when the computer instructions are executed on a vehicle-mounted terminal, the vehicle-mounted terminal is caused to perform the method described in the first aspect and any one of the possible implementation manners.
In a fifth aspect, embodiments of the present application provide a computer program product, which when run on a computer, causes the computer to perform the method as described in the first aspect and any one of the possible implementations thereof.
In a sixth aspect, there is provided circuitry comprising processing circuitry configured to perform the method of displaying a user interface as in any one of the first aspects above.
In a seventh aspect, an embodiment of the present application provides a chip system, including at least one processor and at least one interface circuit, where the at least one interface circuit is configured to perform a transceiving function and send an instruction to the at least one processor, and when the at least one processor executes the instruction, the at least one processor performs a method for displaying a user interface as in the first aspect and any possible implementation manner thereof.
Drawings
Fig. 1 is a schematic structural diagram of a communication system according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure;
fig. 3 is a schematic structural diagram of another electronic device provided in the embodiment of the present application;
FIG. 4 is a flowchart illustrating a method for displaying a user interface according to an embodiment of the present disclosure;
FIG. 5 is a flowchart illustrating a method for displaying a user interface according to an embodiment of the present disclosure;
FIG. 6 is a schematic diagram of a display user interface provided by an embodiment of the present application;
FIG. 7 is a schematic diagram of yet another display user interface provided by an embodiment of the present application;
FIG. 8 is a schematic diagram of yet another illustrative user interface provided by an embodiment of the present application;
FIG. 9 is a schematic diagram of yet another illustrative user interface provided by an embodiment of the present application;
FIG. 10 is a schematic diagram of yet another illustrative user interface provided by an embodiment of the present application;
FIG. 11 is a schematic diagram of yet another illustrative user interface provided by an embodiment of the present application;
FIG. 12 is a schematic diagram of yet another illustrative user interface provided by an embodiment of the present application;
FIG. 13 is a schematic diagram of yet another illustrative user interface provided by an embodiment of the present application;
FIG. 14 is a schematic diagram of yet another illustrative user interface provided by an embodiment of the present application;
FIG. 15 is a schematic diagram of yet another illustrative user interface provided by an embodiment of the present application;
fig. 16 is a schematic structural diagram of a chip system according to an embodiment of the present disclosure.
Detailed Description
The following describes a method for displaying a user interface and a vehicle-mounted terminal provided by an embodiment of the present application in detail with reference to the accompanying drawings.
The terms "first" and "second" and the like in the description and drawings of the present application are used for distinguishing different objects or for distinguishing different processes for the same object, and are not used for describing a specific order of the objects.
Furthermore, the terms "including" and "having", and any variations thereof, in the description of the present application are intended to cover non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to those steps or elements, but may include other steps or elements not expressly listed or inherent to such a process, method, article, or apparatus.
It should be noted that in the embodiments of the present application, words such as "exemplary" or "for example" are used to indicate examples, illustrations or explanations. Any embodiment or design described herein as "exemplary" or "e.g.," is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the word "exemplary" or "such as" is intended to present concepts related in a concrete fashion.
In the description of the present application, "a plurality" means two or more unless otherwise specified. "And/or" herein merely describes an association between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A alone, both A and B, or B alone.
Vehicle-mounted terminals (also called car machines) are mostly installed in the vehicle's center console, and enable communication between people and the vehicle, and between the vehicle and the outside world.
As shown in fig. 1 (a), in some embodiments of the present application, the in-vehicle terminal 200 may establish a communication connection with the electronic device 100 in a wired or wireless manner.
The electronic device 100 in the present application may be, for example, a mobile phone, a tablet computer, a personal computer (PC), a personal digital assistant (PDA), a smart watch, a netbook, a wearable electronic device, an augmented reality (AR) device, a virtual reality (VR) device, an in-vehicle device, a smart car, a smart speaker, a robot, or the like; the specific form of the electronic device is not particularly limited in the present application.
Illustratively, the car machine 200 and the electronic device 100 may be interconnected using, for example, the MirrorLink standard, so as to implement bidirectional control of specific application software between the electronic device 100 and the car machine 200. Thus, while the car is being driven, the user only needs the physical keys on the car machine 200, the touch controls on the screen of the car machine 200, or voice commands to control the electronic device 100, including answering or dialing calls, playing the phone's music, navigating with the phone, and so on, without looking at the screen of the electronic device 100, touching it, or operating its physical keys; the phone itself nevertheless remains operable.
At this time, the car machine 200 may receive data sent by the electronic device 100, including but not limited to: images of the inside or outside of the vehicle captured by a camera; the user's voice and other ambient sounds detected by a built-in sound pickup; data detected by sensors; and the like. In this application, the car machine 200 may determine whether the vehicle is in a preset scene, such as severe weather or a congested road, based on its own data together with the data acquired from the electronic device 100, so that the car machine 200 can perform a corresponding operation, such as displaying a corresponding interface or giving a voice prompt.
As shown in fig. 1 (b), in other embodiments of the present application, the car machine 200 may not establish a communication connection with the electronic device 100. In that case, the car machine 200 may detect the user's touch operations through its touch screen and the user's operation of physical keys on the steering wheel, and may acquire data through the vehicle's own cameras and sensing devices to determine whether the vehicle is in a preset scene, so that the car machine 200 can perform a corresponding operation, such as displaying a corresponding interface or giving a voice prompt.
Fig. 2 shows a schematic structural diagram of the electronic device 100.
The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a key 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a Subscriber Identification Module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It is to be understood that the illustrated structure of the embodiment of the present invention does not specifically limit the electronic device 100. In other embodiments of the present application, electronic device 100 may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units, such as: the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processor (GPU), an Image Signal Processor (ISP), a controller, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors.
In this embodiment of the application, the application processor may determine, for example from data acquired from sensors (such as the acceleration acquired by the acceleration sensor 180E), whether the user is in a driving state and whether the user is in a scene requiring high attention while driving, and thereby determine whether some functions of the electronic device 200 need to be disabled or controlled by voice. It is understood that some or all of the data processing may also be handled by a GPU, an NPU, or the like; this is not limited in this embodiment of the application.
For example, the electronic device 100 captures images of the vehicle's surroundings through the camera 193, or receives such images from the electronic device 200, and the GPU or NPU of the electronic device 100 may be invoked for image analysis to determine whether the vehicle is on a high-risk section or what the current weather conditions are.
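For instance, a crude driving-state check on accelerometer samples might look like the sketch below; the sampling interface, the integration approach, and the thresholds are assumptions (a real system would fuse GPS fixes to bound drift).

```python
# Hedged sketch of a driving-state estimate from longitudinal acceleration.
def estimate_speed(accel_samples_ms2, dt_s, v0=0.0):
    """Integrate acceleration (m/s^2) sampled every dt_s seconds."""
    v = v0
    for a in accel_samples_ms2:
        v += a * dt_s
    return v

# 10 s of gentle acceleration at 0.5 m/s^2 -> about 5 m/s (18 km/h).
speed = estimate_speed([0.5] * 100, dt_s=0.1)
is_driving = speed > 2.0  # assumed threshold above walking speed
```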
The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
A memory may also be provided in processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that have just been used or recycled by the processor 110. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Avoiding repeated accesses reduces the latency of the processor 110, thereby increasing the efficiency of the system.
In some embodiments, processor 110 may include one or more interfaces. The interface may include an integrated circuit (I2C) interface, an integrated circuit built-in audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a general-purpose input/output (GPIO) interface, a Subscriber Identity Module (SIM) interface, and/or a Universal Serial Bus (USB) interface, etc.
The charging management module 140 is configured to receive charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 140 may receive charging input from a wired charger via the USB interface 130. In some wireless charging embodiments, the charging management module 140 may receive a wireless charging input through a wireless charging coil of the electronic device 100. The charging management module 140 may also supply power to the electronic device through the power management module 141 while charging the battery 142.
The power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140, and supplies power to the processor 110, the internal memory 121, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be used to monitor parameters such as battery capacity, battery cycle count, battery state of health (leakage, impedance), etc. In some other embodiments, the power management module 141 may also be disposed in the processor 110. In other embodiments, the power management module 141 and the charging management module 140 may be disposed in the same device.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution including 2G/3G/4G/5G wireless communication applied to the electronic device 100. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 150 may receive the electromagnetic wave from the antenna 1, filter, amplify, etc. the received electromagnetic wave, and transmit the electromagnetic wave to the modem processor for demodulation. The mobile communication module 150 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating a low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then passes the demodulated low frequency baseband signal to a baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.) or displays an image or video through the display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional modules, independent of the processor 110.
The wireless communication module 160 may provide a solution for wireless communication applied to the electronic device 100, including Wireless Local Area Networks (WLANs) (e.g., wireless fidelity (Wi-Fi) networks), bluetooth (bluetooth, BT), Global Navigation Satellite System (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (IR), and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering processing on electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, perform frequency modulation and amplification on the signal, and convert the signal into electromagnetic waves through the antenna 2 to radiate the electromagnetic waves.
In some embodiments, antenna 1 of electronic device 100 is coupled to mobile communication module 150 and antenna 2 is coupled to wireless communication module 160 so that electronic device 100 can communicate with networks and other devices through wireless communication techniques. The wireless communication technology may include global system for mobile communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), time-division code division multiple access (time-division code division multiple access, TD-SCDMA), Long Term Evolution (LTE), LTE, BT, GNSS, WLAN, NFC, FM, and/or IR technologies, etc. The GNSS may include a Global Positioning System (GPS), a global navigation satellite system (GLONASS), a beidou satellite navigation system (BDS), a quasi-zenith satellite system (QZSS), and/or a Satellite Based Augmentation System (SBAS).
The electronic device 100 implements display functions via the GPU, the display screen 194, and the application processor. The GPU is a microprocessor for image processing, and is connected to the display screen 194 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 194 is used to display images, video, and the like. The display screen 194 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, where N is a positive integer greater than 1.
The electronic device 100 may implement a shooting function through the ISP, the camera 193, the video codec, the GPU, the display 194, the application processor, and the like.
The ISP is used to process the data fed back by the camera 193. For example, when a photo is taken, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing and converting into an image visible to naked eyes. The ISP can also carry out algorithm optimization on the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The light sensing element converts the optical signal into an electrical signal, which is then passed to the ISP where it is converted into a digital image signal. And the ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into image signal in standard RGB, YUV and other formats. In some embodiments, the electronic device 100 may include 1 or N cameras 193, N being a positive integer greater than 1.
The digital signal processor is used for processing digital signals, and can process digital image signals and other digital signals. For example, when the electronic device 100 selects a frequency bin, the digital signal processor is used to perform fourier transform or the like on the frequency bin energy.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs, so that it can play or record video in a variety of encoding formats, such as moving picture experts group (MPEG)-1, MPEG-2, MPEG-3, and MPEG-4.
The NPU is a neural-network (NN) computing processor that processes input information quickly by using a biological neural network structure, for example, by using a transfer mode between neurons of a human brain, and can also learn by itself continuously. Applications such as intelligent recognition of the electronic device 100 can be realized through the NPU, for example: image recognition, face recognition, speech recognition, text understanding, and the like.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the memory capability of the electronic device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as music, video, etc. are saved in an external memory card.
The internal memory 121 may be used to store computer-executable program code, where the code includes instructions. The internal memory 121 may include a program storage area and a data storage area. The program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like. The data storage area may store data created during use of the electronic device 100 (such as audio data and a phone book), and the like. In addition, the internal memory 121 may include a high-speed random access memory and may further include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or a universal flash storage (UFS). The processor 110 runs the various functional applications and data processing of the electronic device 100 by executing instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor.
The electronic device 100 may implement audio functions via the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone interface 170D, and the application processor. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also called a "horn", is used to convert the audio electrical signal into an acoustic signal. The electronic apparatus 100 can listen to music through the speaker 170A or listen to a handsfree call.
The receiver 170B, also called "earpiece", is used to convert the electrical audio signal into an acoustic signal. When the electronic apparatus 100 receives a call or voice information, it can receive voice by placing the receiver 170B close to the ear of the person.
The microphone 170C, also referred to as a "microphone," is used to convert sound signals into electrical signals. When making a call or transmitting voice information, the user can input a voice signal to the microphone 170C by speaking the user's mouth near the microphone 170C. The electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C to achieve a noise reduction function in addition to collecting sound signals. In other embodiments, the electronic device 100 may further include three, four or more microphones 170C to collect sound signals, reduce noise, identify sound sources, perform directional recording, and so on.
The headphone interface 170D is used to connect a wired headphone. The headset interface 170D may be the USB interface 130, or may be a 3.5 mm open mobile terminal platform (OMTP) standard interface or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
The keys 190 include a power-on key, a volume key, and the like. The keys 190 may be mechanical keys. Or may be touch keys. The electronic apparatus 100 may receive a key input, and generate a key signal input related to user setting and function control of the electronic apparatus 100.
The motor 191 may generate a vibration cue. The motor 191 may be used for incoming call vibration cues, as well as for touch vibration feedback. For example, touch operations applied to different applications (e.g., photographing, audio playing, etc.) may correspond to different vibration feedback effects. The motor 191 may also respond to different vibration feedback effects for touch operations applied to different areas of the display screen 194. Different application scenes (such as time reminding, receiving information, alarm clock, game and the like) can also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
Indicator 192 may be an indicator light that may be used to indicate a state of charge, a change in charge, or a message, missed call, notification, etc.
The SIM card interface 195 is used to connect a SIM card.
For example, fig. 3 shows a schematic structural diagram of the electronic device 200.
The electronic device 200 may include a processor 210, a memory 220, a wireless communication module 230, a speaker 240, a microphone 250, a display 260, a camera 270, a USB interface 280, a sensor module 290, and the like. The sensor module 290 may include a pressure sensor 290A, a magnetic sensor 290B, an acceleration sensor 290C, a temperature sensor 290D, a touch sensor 290E, an ambient light sensor 290F, a positioning system 290G, a rainfall sensor 290H, and the like. For example, the electronic device 200 may determine whether the vehicle is in motion, i.e., whether the user is in a driving state, from the data of the acceleration sensor 290C. As another example, the electronic device 200 may determine whether the vehicle is in thunderstorm weather from the data of the rainfall sensor 290H, or may transmit that data to the electronic device 100 so that the electronic device 100 makes the determination. As yet another example, the electronic device 200 can determine whether the vehicle is in dense fog or darkness based on the ambient light sensor 290F, or may transmit that sensor's data to the electronic device 100 so that the electronic device 100 determines whether the vehicle is in such an environment.
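The sensor examples above could be condensed into a small voting helper like the sketch below; the thresholds are assumptions, not values taken from the patent.

```python
# Hedged sketch of weather/lighting hints from the car machine's own sensors.
def weather_hint(rain_rate_mm_h, ambient_lux):
    if rain_rate_mm_h > 8:        # rainfall sensor 290H: heavy rain
        return "thunderstorm"
    if ambient_lux < 50:          # ambient light sensor 290F: fog or darkness
        return "fog_or_dark"
    return "clear"

print(weather_hint(rain_rate_mm_h=12, ambient_lux=500))  # -> "thunderstorm"
```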
Processor 210 may include one or more processing units, and the various processing units may be stand-alone devices or may be integrated within one or more processors.
In some embodiments of the present application, the electronic device 200 may obtain, for example, sensor data from the electronic device 200 itself or from the electronic device 100 to determine the user's state and scene, so as to determine the corresponding user interface to display.
In still other embodiments of the present application, a plurality of applications (e.g., a music player, a navigation application, a telephony application, and a notification application) may run on the electronic device 200 and communicate directly with their application servers, for example receiving a new-message or incoming-call prompt from the server and sending the user's corresponding interface operations back to it. The electronic device 200 may obtain data from one or more sensors, or receive data from the electronic device 100, and determine the user's state and scene so as to decide whether to display the user interface with some functions and/or operations disabled.
The memory 220 may be used to store computer-executable program code. The memory 220 may include a program storage area and a data storage area. The program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like. The data storage area may store data created during use of the electronic device 200 (such as audio data and a phone book), and the like. Further, the memory 220 may include a high-speed random access memory and may further include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or a universal flash storage (UFS). The processor 210 runs the various functional applications and data processing of the electronic device 200 by executing instructions stored in the memory 220 and/or instructions stored in a memory provided in the processor.
The wireless communication module 230 may provide a solution for wireless communication applied on the electronic device 200, including WLAN, such as Wi-Fi network, bluetooth, NFC, IR, etc. The wireless communication module 230 may be one or more devices integrating at least one communication processing module.
In some embodiments of the present application, the electronic device 200 may establish a wireless communication connection with the electronic device 100 through the wireless communication module 230. Alternatively, the electronic device 200 may also establish a wired communication connection with the electronic device 100 through the USB interface 280, which is not limited in this embodiment.
In some embodiments of the present application, the electronic device 200 may capture an image of the surrounding environment of the vehicle through the camera 270, so that the electronic device 200 or the electronic device 100 performs image analysis to determine the current scene where the user is located, for example: whether the vehicle runs on a high-risk road section, and the like.
For the functions of the speaker 240, the microphone 250, the display screen 260, and the like, refer to the related descriptions of the electronic device 100; details are not repeated here.
It is to be understood that the illustrated structure of the embodiment of the present invention does not specifically limit the electronic device 200. In other embodiments of the present application, the electronic device 200 may include more or fewer components than shown, or combine certain components, or split certain components, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The following describes the technical solution provided in the embodiment in detail, taking the example that the electronic device 100 is a mobile phone and the electronic device 200 is a car machine.
In the prior art, touch-based in-vehicle systems (car machines) have gradually become standard equipment for vehicles from many manufacturers. However, because the touch screen of an in-vehicle system cannot provide tactile feedback, it occupies more of the user's visual resources during driving, so the user cannot concentrate on driving and driving safety is affected. In addition, a user may connect a mobile phone to the car machine so that the car machine interacts with the phone's functions, further enriching the use of the in-vehicle system. While the vehicle is moving, the user may then watch the in-vehicle display to handle notification messages received by the phone or to use the phone's richer entertainment functions. However, when the user is driving at high speed or in certain specific scenes, such as driving on a section with dense pedestrian flow, passing through a high-risk section, driving at night, driving in bad weather, or driving on a speed-limited section, the user should devote full attention to driving without being distracted. In other words, in some cases a user interface should be provided that reduces the visual and cognitive resources occupied by the vehicle display, so as to ensure driving safety.
Therefore, an embodiment of this application provides a method for displaying a user interface. After the car machine starts working, it can automatically identify the scene the vehicle is in and display different user interfaces for different scenes; when the scene changes, it displays the user interface corresponding to the new scene. The displayed user interfaces may include a first user interface and a second user interface. The first user interface hides some functions, or displays them but does not allow them to be operated; the second user interface displays all functions. Here "functions" covers both whole functions and particular operations within a function, and some functions may be presented in the form of applications (APPs).
For example, when it is recognized that the user is not driving, the touch screen of the car machine may display the second user interface.
For another example, when it is recognized that the user is driving but in a scene that does not demand too much attention, or in a scene where the user is allowed to perform some other operations that occupy few visual or cognitive resources, i.e., a scene not requiring the user's full attention, the touch screen of the car machine may also display the second user interface.
As another example, when it is recognized that the user is driving and is in a scene requiring full attention, the touch screen of the car machine displays the first user interface.
The method for displaying a user interface in this embodiment of the application may be pre-configured in the vehicle-mounted terminal, so that all such terminals display user interfaces using this method; alternatively, whether the vehicle-mounted terminal adopts the method may be set according to the user's own choice.
Specific scenes requiring full attention include, for example: driving at high speed (a first preset speed may be determined according to vehicle performance, and the vehicle is judged to be travelling at high speed when the real-time speed exceeds it); driving on a section with dense pedestrian flow (e.g., a business center, a tourist attraction, or a school section); passing through a high-risk section (e.g., continuous turns, an expressway, or a tunnel); driving at night (which may be determined, e.g., from the sunset time or the illumination intensity); driving in severe weather (e.g., dense fog, ice and snow, or thunderstorms); and driving on a speed-limited section.
Sometimes the car machine alone cannot accurately identify the scene the user is in. When a mobile phone is connected, the recognition results of the phone and the car machine can be combined to identify the scene more accurately. For example, when it is recognized that the user is driving but not in a scene requiring full attention, the touch screen of the car machine may display the user interface containing all functions; when it is recognized that the user needs full attention, the touch screen displays the user interface in which some functions and/or operations are hidden or not allowed to be operated.
In some embodiments, the user may start in a scene not requiring full attention and be operating some function on the second user interface when the vehicle enters a scene that does require full attention. At that point, the car machine may immediately interrupt the current operation and switch to the first user interface to ensure driving safety; alternatively, it may switch only after the user completes the current operation, preserving the user experience. This embodiment of the application places no particular limit on the choice.
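Both policies fit in a few lines; the sketch below (with assumed names) shows immediate switching versus deferring until the current operation ends.

```python
# Hedged sketch of the two interface-switching policies described above.
def on_scene_change(requires_full_attention, operation_in_progress,
                    policy="immediate"):
    if not requires_full_attention:
        return "show_second_interface"
    if operation_in_progress and policy == "defer":
        return "switch_after_current_operation"  # protects user experience
    return "show_first_interface_now"            # protects driving safety
```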
As shown in fig. 4, the flowchart of the method for switching user interfaces provided by this embodiment of the application is as follows:
S401: The car machine starts working.
Specifically, after the user starts the vehicle to start the driving mode, the vehicle machine starts to display the user interface of the touch screen, and at this time, the vehicle machine is judged to start to work. Generally, the user interface at this time is a second user interface (see fig. 6) that is displayed by default and contains all functions.
It should be noted that, in the embodiment of the present application, whether the mobile phone is connected to the car machine is not limited, that is, the mobile phone may be connected to the car machine or not. The step only considers whether the car machine starts to work or not, and does not need to consider the connection condition of the mobile phone and the car machine.
And under the condition that the mobile phone is not connected to the vehicle machine, the vehicle machine identifies the scene. Under the condition that the mobile phone is connected to the car machine, data of the mobile phone can be synchronized with the car machine, the car machine is combined with the data collected by the mobile phone to identify the scene where the user is located, and then the user interface corresponding to the scene is displayed more accurately. In other words, the car machine may undertake all or part of the data processing work of the mobile phone, and the division of work between the car machine and the mobile phone is not limited in the embodiment of the present application.
The following steps are explained taking as an example the case where the car machine is responsible for judging the user's state and scene and determining the user interface displayed on the car machine touch screen.
S402, the car machine determines whether the vehicle is in a driving state. If the vehicle is not in a driving state, step S403 is executed; if it is, step S404 is executed.
In a possible implementation manner, the car machine may obtain information such as the vehicle speed measured in real time and determine whether the vehicle is in a traveling state, so as to determine whether the vehicle is in a driving state. Alternatively, the real-time position of the car machine can be determined through its Global Positioning System (GPS), and the moving speed of the car machine can be calculated from the real-time position. It is then judged whether the moving speed of the car machine is greater than a threshold. If the moving speed is greater than the threshold, the vehicle can be considered to be in a driving state; if not, the vehicle can be considered not to be in a driving state.
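By way of non-limiting illustration, the following Kotlin sketch shows one way the moving speed might be derived from two consecutive GPS fixes and compared against a threshold. The haversine distance, the 5 km/h threshold, and all names are assumptions made for this sketch, not part of the claimed method.

    import kotlin.math.*

    data class GpsFix(val latDeg: Double, val lonDeg: Double, val timestampMs: Long)

    // Great-circle distance between two fixes in meters (haversine formula).
    fun distanceMeters(a: GpsFix, b: GpsFix): Double {
        val r = 6_371_000.0 // mean Earth radius in meters
        val dLat = Math.toRadians(b.latDeg - a.latDeg)
        val dLon = Math.toRadians(b.lonDeg - a.lonDeg)
        val h = sin(dLat / 2).pow(2) +
                cos(Math.toRadians(a.latDeg)) * cos(Math.toRadians(b.latDeg)) * sin(dLon / 2).pow(2)
        return 2 * r * asin(sqrt(h))
    }

    // Speed in km/h derived from two consecutive GPS fixes.
    fun movingSpeedKmh(prev: GpsFix, curr: GpsFix): Double {
        val dtHours = (curr.timestampMs - prev.timestampMs) / 3_600_000.0
        return if (dtHours > 0) distanceMeters(prev, curr) / 1000.0 / dtHours else 0.0
    }

    fun main() {
        val prev = GpsFix(39.9000, 116.4000, 0L)
        val curr = GpsFix(39.9009, 116.4000, 5_000L) // ~100 m north, 5 s later
        val thresholdKmh = 5.0 // assumed threshold for "in a driving state"
        val speed = movingSpeedKmh(prev, curr)
        println("speed=%.1f km/h, driving=%b".format(speed, speed > thresholdKmh))
    }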
Optionally, in a case where the mobile phone is connected to the car machine, the car machine may also obtain data of the mobile phone and obtain the moving speed of the mobile phone through a sensor provided in the mobile phone (for example, an acceleration sensor), so as to determine whether the vehicle is in a driving state.
S403, the touch screen of the car machine displays the second user interface.
Specifically, when it is determined that the vehicle is not in a driving state, the user does not need to pay close attention to driving, so the touch screen of the car machine can make all functions available for the user to operate conveniently. In the second user interface, the first function and the second function are displayed in a second preset mode, where the second preset mode is used for indicating that the user is allowed to operate the first function and the second function. The first function and the second function together constitute all functions displayed on the touch screen of the car machine; that is, the second user interface displays all functions for the user to select and use.
As shown in fig. 6, a user interface of the car machine touch screen is exemplarily shown, including a main menu 601 displayed on the left side and a display interface 602 displayed on the right side. The main menu 601 displays the main functions included in the car machine, and the content in the display interface 602 corresponds to the function selected in the main menu 601, with a different user interface displayed for each. The main menu 601 shown in fig. 6 exemplarily contains five functions: home page, navigation, call, music, and car control. Generally, the touch screen displays all the commonly used functions on the home interface, i.e., the display interface 602 shown after the "home" function in the main menu 601 is selected. As shown in fig. 6, this home interface includes six commonly used functions: map, WeChat, contacts, telephone, entertainment, and weather.
It should be understood that the car machine may further include other main or commonly used application functions that can be displayed on the car machine touch screen, and that the current home interface can be switched through touch operations to display more commonly used functions; reference may be made to the prior art, and this is not specifically described in the embodiment of the present application.
The user can then click through the corresponding functions step by step to enter submenus and reach the final function. For example, when the user wants to make a call using the car machine, the user can touch the phone icon to enter the call interface and then perform a dialing operation or a contact query to place the call. Or, as shown in fig. 7, the user clicks the call function in the main menu 601 on the left side of the screen, and the display interface 602 shows the corresponding call interface, in which operations such as dialing, contact search, and browsing the recent contact list can be provided to the user (this interface is not shown in fig. 7; it is understood that it can be accessed through the relevant operations).
S404, the car machine determines whether the vehicle is in a preset scene or the current vehicle speed is greater than a first preset speed. If the car machine determines that the vehicle is in a preset scene or the current vehicle speed is greater than the first preset speed, step S405 is executed; if the car machine determines that the vehicle is not in a preset scene and the current vehicle speed is less than or equal to the first preset speed, step S403 is executed.
Specifically, the condition that the vehicle is in a preset scene or that the current vehicle speed is greater than the first preset speed may be taken as a first preset condition. It is then determined that the car machine touch screen displays the first user interface when the vehicle meets the first preset condition.
The preset scene may be, for example: the user drives the vehicle on a road section with dense pedestrian flow (such as a business center, a tourist attraction, a school road section, etc.); the user drives the vehicle through a high-risk road section (such as a continuous turn, an expressway, a tunnel, etc.); the user drives the vehicle at night (determined, for example, according to the sunset time, the illumination intensity, etc.); the user drives the vehicle in severe weather (such as dense fog, ice and snow, thunderstorms, etc.); or the user drives the vehicle on a speed-limited road section (such as a downhill road section, a village road section, a road intersection, etc.).
Whether the current scene is a preset scene may be a conclusion obtained by the car machine through analysis of sensing data or in other manners. Five exemplary preset scenes and the methods for judging them are provided and described in detail below.
Preset scene one: the vehicle is on a road section with dense pedestrian flow.
When judging whether the user is driving the vehicle on a road section with dense pedestrian flow, sensor data in the vehicle and/or the mobile phone can be analyzed. The car machine judges whether the vehicle is on a crowded road section according to at least one of the following data: the running state of the vehicle, an image of the vehicle's surroundings, or the running route. For example, whether the vehicle is on a road section with dense pedestrian flow can be determined according to the vehicle speed, and whether the vehicle is in a crowded place such as a business center can be determined according to the position of the vehicle.
Optionally, the vehicle should be capable of acquiring the vehicle speed in real time, and the car machine may track the change of the vehicle speed over a period of time. If the vehicle speed changes repeatedly within a preset time period while remaining low, that is, the vehicle brakes repeatedly within the preset time period, it may be determined that the vehicle is on a road section with dense pedestrian flow where it cannot run smoothly.
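A minimal Kotlin sketch of this judgment, assuming the car machine samples the vehicle speed periodically; the low-speed ceiling, the braking-event count, and the sample window are illustrative assumptions rather than values fixed by this embodiment.

    // Returns true if, within the sampled window, the speed stays low and the
    // vehicle repeatedly decelerates (repeated braking), suggesting a road
    // section with dense pedestrian flow.
    fun isDensePedestrianSection(
        speedSamplesKmh: List<Double>,     // assumed periodic samples over the preset period
        lowSpeedCeilingKmh: Double = 20.0, // assumed "low speed" ceiling
        minBrakingEvents: Int = 3          // assumed braking-event threshold
    ): Boolean {
        if (speedSamplesKmh.isEmpty()) return false
        val allLow = speedSamplesKmh.all { it <= lowSpeedCeilingKmh }
        // Count sample-to-sample drops larger than 2 km/h as braking events.
        val brakingEvents = speedSamplesKmh.zipWithNext().count { (prev, curr) -> curr < prev - 2.0 }
        return allLow && brakingEvents >= minBrakingEvents
    }

    fun main() {
        val window = listOf(15.0, 12.0, 14.0, 9.0, 11.0, 6.0, 8.0) // stop-and-go pattern
        println(isDensePedestrianSection(window)) // true
    }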
Optionally, the navigation software includes road information for each road segment, for example a business center segment, a tourist attraction, a school segment, etc., where pedestrian flow is generally dense. Therefore, the current position of the vehicle can be determined through the car machine and/or the mobile phone, and whether the vehicle is on a road section with dense pedestrian flow can be determined from the current position and the road information for that position in the navigation software.
Optionally, the environment outside the vehicle can be identified by applying image analysis technology to images of the vehicle's surroundings captured by a camera in or connected to the vehicle, so as to determine whether the vehicle is on a road section with dense pedestrian flow. In this case, a threshold value may be preset; for example, when the number of people around the vehicle exceeds 10, it is determined that the vehicle is on a road section with dense pedestrian flow.
Preset scene two: the vehicle is passing through a high-risk road section.
When judging whether the user is driving the vehicle through a high-risk road section, sensor data in the vehicle and/or the mobile phone can be analyzed. The car machine judges whether the vehicle is on a high-risk road section according to at least one of the following data: the running route of the vehicle, the running state, or an image of the vehicle's surroundings. For example, whether the vehicle is continuously turning may be determined according to its running route; whether the vehicle is on an expressway may be determined according to its speed; whether the vehicle is running on a wet road surface may be determined according to the running route and running speed; and the light around the vehicle can be judged through a light sensor to determine whether the vehicle is in a tunnel.
Optionally, the environment outside the vehicle can be identified by applying image analysis technology to images of the vehicle's surroundings captured by a camera in or connected to the vehicle, so as to determine whether the vehicle is on a high-risk road section.
Optionally, since the navigation software includes road condition data (e.g., sharp turns, tunnels, etc.) for each road segment, the current position of the vehicle may be determined through the mobile phone or the car machine, and whether the vehicle is on a high-risk road section may be determined from the current position and the road condition data for that position in the navigation software.
Preset scene three: the vehicle is in a night driving state.
When judging whether the user is driving the vehicle at night, sensor data in the vehicle and/or the mobile phone can be analyzed. The car machine judges whether the vehicle is in a night driving state according to at least one of the following data: the ambient light conditions of the vehicle, images of the vehicle's surroundings, or the current time. For example, when the illumination intensity is lower than a threshold for longer than a preset duration, it is judged that the user is in a night driving state. Or, if the user drives the vehicle during the period from one hour after sunset to one hour before sunrise, it is determined that the user is in a night driving state.
Optionally, image analysis technology may be applied to images of the surrounding environment captured by a camera of the vehicle and/or the mobile phone to identify the illumination intensity of the environment where the vehicle is located and determine whether the user is in a night driving state.
Optionally, the light around the vehicle can be judged according to a light sensor of the car machine, and the duration of the light condition can further be combined to determine more accurately whether the user is in a night driving state.
Optionally, the start of night driving can be set to a time after sunset and the end of night driving to a time before sunrise, judged according to the current date and position coordinates. For example, if sunrise is at 7:00 and sunset at 18:00 in the current season at a certain place, the night driving period can be defined as 19:00 to 6:00 of the next day, and a user driving during this period is driving at night.
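The time-window judgment described above might be sketched as follows in Kotlin; the one-hour offsets follow the example in this paragraph, while the function and type names are assumptions for illustration.

    import java.time.LocalTime

    // Judges night driving from the current time and the local sunrise/sunset
    // times, applying the one-hour offsets described in the text.
    fun isNightDriving(now: LocalTime, sunrise: LocalTime, sunset: LocalTime): Boolean {
        val nightStart = sunset.plusHours(1)  // e.g. 18:00 sunset -> night from 19:00
        val nightEnd = sunrise.minusHours(1)  // e.g. 7:00 sunrise -> night until 6:00
        // The night window wraps past midnight.
        return now >= nightStart || now <= nightEnd
    }

    fun main() {
        val sunrise = LocalTime.of(7, 0)
        val sunset = LocalTime.of(18, 0)
        println(isNightDriving(LocalTime.of(22, 30), sunrise, sunset)) // true
        println(isNightDriving(LocalTime.of(12, 0), sunrise, sunset))  // false
    }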
Preset scene four: the current weather conditions are severe.
When judging whether the user is driving the vehicle in severe weather, sensor data in the mobile phone and/or the car machine can be analyzed. The car machine judges whether the vehicle is being driven in severe weather according to at least one of the following data: the ambient light conditions of the vehicle, images of the surrounding environment, or data from weather software. For example, the environment around the vehicle (e.g., dense fog or darkness) can be determined from a light sensor in the vehicle, and whether the vehicle is in thunderstorm weather can be determined from a rainfall sensor in the vehicle.
Optionally, the environment outside the vehicle can be identified by applying image analysis technology to images of the vehicle's surroundings captured by a camera connected in the vehicle, so as to determine whether the vehicle is in the above severe weather.
Optionally, the current weather condition may also be obtained through the weather application software, so as to determine whether the vehicle is in the above severe weather.
Preset scene five: the vehicle is on a speed-limited road section.
When judging whether the user is driving the vehicle on a speed-limited road section, the car machine judges according to at least one of the following data: images of the vehicle's surroundings, or current road information. For example, the current road information can be determined through navigation software in the vehicle and/or the mobile phone to determine whether the user is on a speed-limited road section. Special road sections, such as downhill road sections, village road sections, and road intersections, generally impose vehicle speed limits to ensure driving safety; the navigation software contains such special road section information, so the car machine can confirm whether the user is on a speed-limited road section according to the driving position of the user's vehicle.
Optionally, whether the vehicle is on a speed-limited road section can be determined by applying image analysis technology to speed-limit signs on the surrounding road sections captured by a camera connected in the vehicle.
According to the above methods, the car machine can obtain the corresponding data to judge whether the vehicle is in a preset scene, and in which preset scene or scenes, and further judge whether the user needs to concentrate their attention at this moment.
The vehicle speed is a major factor affecting the user's attention; when the vehicle speed is high, the user must concentrate on driving the vehicle. Therefore, the first preset speed is set for judging whether the vehicle is currently in a high-speed driving state.
The first preset speed is a threshold determined according to vehicle performance or preset directly. When judging whether the user is driving the vehicle at a high speed, the car machine can obtain information such as the measured vehicle speed in real time and compare the real-time speed with the first preset speed. For example, the first preset speed may be set to 60 km/h, and when the speed of the vehicle driven by the user is greater than 60 km/h, the user is considered to be driving at a high speed. If the vehicle currently used has better performance, the first preset speed may be set to 65 km/h; alternatively, if the current user is a novice driver or is older, a lower first preset speed, such as 50 km/h, may be set. Of course, the first preset speed may also be set according to other situations, and this embodiment of the present application is not particularly limited.
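The selection of the first preset speed might be represented as follows; this Kotlin sketch mirrors the example values given above (60, 65, and 50 km/h), while the profile categories and names are assumptions for illustration.

    // Selects the first preset speed from the conditions named in the text.
    enum class DriverProfile { DEFAULT, HIGH_PERFORMANCE_VEHICLE, NOVICE_OR_SENIOR }

    fun firstPresetSpeedKmh(profile: DriverProfile): Double = when (profile) {
        DriverProfile.DEFAULT -> 60.0
        DriverProfile.HIGH_PERFORMANCE_VEHICLE -> 65.0
        DriverProfile.NOVICE_OR_SENIOR -> 50.0
    }

    fun isHighSpeedDriving(currentSpeedKmh: Double, profile: DriverProfile): Boolean =
        currentSpeedKmh > firstPresetSpeedKmh(profile)

    fun main() {
        println(isHighSpeedDriving(62.0, DriverProfile.DEFAULT))                  // true
        println(isHighSpeedDriving(62.0, DriverProfile.HIGH_PERFORMANCE_VEHICLE)) // false
    }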
Optionally, the real-time position of the car machine can be determined through its GPS, and the moving speed of the car machine calculated from the real-time position. It is then judged whether the moving speed of the car machine is greater than the first preset speed. If the moving speed is greater than the first preset speed, the user can be considered to be driving the vehicle at a high speed; if not, the user can be considered not to be driving at a high speed.
Optionally, in a case where the mobile phone is connected to the car machine, the car machine may also obtain data of the mobile phone, and obtain a moving speed of the mobile phone through a sensor (for example, an acceleration sensor) arranged in the mobile phone, so as to determine whether the user is driving the vehicle at a high speed.
It should be noted that, in one implementation manner of step S404, the car machine first determines whether the vehicle driven by the user is in a preset scene. If it is, step S405 may be executed directly; that is, when the vehicle is in a preset scene, the user should focus on driving the vehicle and cannot spare much attention to operate the car machine. If it is not in a preset scene, whether the current vehicle speed is greater than the first preset speed, i.e., whether the vehicle is in a high-speed driving state, is further judged. If the current vehicle speed is greater than the first preset speed, step S405 is executed; that is, even though the vehicle is not in a preset scene, the user must pay high attention to driving when the vehicle is in a high-speed driving state. If the current vehicle speed is less than or equal to the first preset speed, step S403 is executed; that is, only when the vehicle is neither in a preset scene nor in a high-speed driving state does the car machine display the user interface with all functions for the user to use.
In another implementation manner of step S404, the car machine first determines whether the current vehicle speed is greater than the first preset speed. If it is, i.e., the vehicle is in a high-speed driving state, step S405 is executed directly, since the user must pay high attention to driving in this state. If the current vehicle speed is less than or equal to the first preset speed, whether the vehicle is in a preset scene is further judged. As described above, if it is in a preset scene, step S405 is executed; if not, i.e., the vehicle is neither in a preset scene nor in a high-speed driving state, step S403 is executed.
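The first ordering of the S404 judgment described above might be sketched as follows in Kotlin; the names are illustrative assumptions, not part of the claimed method.

    // One possible ordering of the S404 judgment: check the preset scene
    // first, then the first preset speed.
    enum class UiToShow { FIRST_USER_INTERFACE /* S405 */, SECOND_USER_INTERFACE /* S403 */ }

    fun decideUi(inPresetScene: Boolean, currentSpeedKmh: Double, firstPresetSpeedKmh: Double): UiToShow {
        if (inPresetScene) return UiToShow.FIRST_USER_INTERFACE                         // go to S405
        if (currentSpeedKmh > firstPresetSpeedKmh) return UiToShow.FIRST_USER_INTERFACE // go to S405
        return UiToShow.SECOND_USER_INTERFACE                                           // go to S403
    }

    fun main() {
        println(decideUi(inPresetScene = false, currentSpeedKmh = 70.0, firstPresetSpeedKmh = 60.0))
        // FIRST_USER_INTERFACE
    }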
S405, the touch screen of the car machine displays the first user interface.
Specifically, the first user interface does not display the first function; or the first function is displayed in the first user interface in a first preset mode, where the first preset mode is used for indicating that the first function cannot be operated by the user. The second function is displayed in the first user interface in a second preset mode, where the second preset mode is used for indicating that the user is allowed to operate the second function. The first function is a function that occupies more of the user's visual and cognitive resources during operation; the second function is any function other than the first function and occupies fewer visual or cognitive resources. Furthermore, a function in the embodiments of the present application refers to either a whole function or certain operations of a function. A function can be presented in the form of an APP installed on the electronic device, and an operation can be an operation option provided by such an APP.
A function that occupies more of the user's visual and cognitive resources is one whose operation requires further thought and visual participation from the user. For example, when dialing, the user needs to recall the number in their mind and use visual resources to click the corresponding digits on the touch screen in sequence, while ensuring each click is correct; correct dialing therefore occupies a large amount of the user's visual and cognitive resources. For another example, when using the navigation function, the user may search for a place; the place must be input on the touch screen using visual resources, with accuracy ensured, and in some cases the navigation function recommends several routes among which the user needs to think further to confirm the final one, so more visual and cognitive resources are necessarily occupied. Occupying visual and cognitive resources diverts much of the user's attention, so the user cannot concentrate on driving. Therefore, when the vehicle is in a preset scene or the vehicle speed is high, the user must drive attentively to ensure driving safety, and the car machine needs to hide the first functions that occupy more visual and cognitive resources, or render them inoperable.
Then, the first function includes any one or more of: the system comprises an entertainment function, an information function, a dialing function in a call function, a contact searching function in the call function, an address searching function in a navigation function, a song searching function in a music function and a weather inquiry function in a weather function. The second function comprises any one or more of: a recent contact function of the call function, a recommended address function of the navigation function, a recent play list function in the music function, and a local real-time weather function in the weather function.
It should be noted that the first function and the second function are only exemplary, and the first function and the second function may also include other functions. In addition, the division of the first function and the second function may be preconfigured by a manufacturer according to experience, or may be determined by the vehicle machine according to a use condition of a user, which is not specifically limited in this embodiment of the application.
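One possible in-memory representation of the division between first and second functions described above, as a Kotlin sketch; the function list follows the examples in the text, while the data structure and the display logic are assumptions for illustration.

    // A possible representation of the function division named in the text.
    data class CarFunction(val name: String, val isFirstFunction: Boolean)

    val functions = listOf(
        CarFunction("entertainment", true),
        CarFunction("dialing (call)", true),
        CarFunction("address search (navigation)", true),
        CarFunction("song search (music)", true),
        CarFunction("recent contacts (call)", false),
        CarFunction("recommended address (navigation)", false),
        CarFunction("recent play list (music)", false),
        CarFunction("local real-time weather (weather)", false)
    )

    // On the first user interface, first functions are hidden or shown as
    // inoperable; second functions remain operable.
    fun firstUserInterfaceItems(hideFirstFunctions: Boolean): List<String> =
        functions.mapNotNull { f ->
            when {
                !f.isFirstFunction -> f.name          // operable
                hideFirstFunctions -> null            // hidden, as in fig. 8 (a)
                else -> f.name + " (disabled)"        // crossed or grayed, as in fig. 8 (b)
            }
        }

    fun main() = println(firstUserInterfaceItems(hideFirstFunctions = false))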
In one possible implementation, two first user interfaces are shown by way of example in fig. 8. Among the six commonly used functions on the display interface 602 shown after the "home" function in the main menu 601 is selected, the WeChat function and the entertainment function may occupy more of the user's visual or cognitive resources when used. For example, when replying to a WeChat message, the user needs to concentrate their vision on the touch screen to read or type and therefore cannot pay attention to the driving conditions ahead, which increases the probability of danger in a preset scene or at high vehicle speed. The WeChat function and the entertainment function are therefore determined as the first function, and the map, contacts, telephone, and weather functions as the second function. Thus, as shown in (a) of fig. 8, the first user interface does not display the first function but only the second function; that is, the first function is hidden so that the user cannot operate it but can still operate the second function. Alternatively, as shown in (b) of fig. 8, the first function is made inoperable by marking a cross on its icon on the home screen, while the second function, which the user is allowed to operate, is displayed normally. Of course, besides marking a cross, the first function can also be made inoperable by graying out the icon (so the function cannot be used after clicking); this is not specifically limited in the embodiment of the present application.
In a possible implementation, as shown in fig. 9, a first user interface is further provided in which the first function is a partial operation of the call function. Some functions on the car machine touch screen occupy more of the user's visual or cognitive resources but are frequently used or necessary, and cannot be completely disabled (hidden or made inoperable) by the above method. Accordingly, only those operations of such functions that occupy more visual or cognitive resources may be disabled, so that the user stays focused on driving. For example, the call function is essential, and the car machine cannot disable it entirely without risking missed important contacts, but it can disable some operations of the call function to ensure driving safety. Referring to fig. 9, compared with the call interface shown in fig. 7, only the call record interface is retained, and the dialing and contact search operations are not displayed, so the user can return a call to a recent contact with a single click; communication is thus achieved without occupying more visual or cognitive resources. For another example, the navigation function is essential for driving and cannot be prohibited entirely. Therefore, the car machine only provides address recommendations on the navigation interface, and the user obtains an optimal driving route by clicking the corresponding address; address search through the touch screen is prohibited, ensuring the user's driving safety.
Therefore, in the method for displaying a user interface provided by the present application, by setting the preset scenes and the first preset speed, the car machine can judge during driving whether the vehicle is currently in a preset scene or the current vehicle speed is greater than the first preset speed; if so, the car machine determines to display the first user interface. This avoids the situation in the prior art where the user interface always displays all functions, so that the user may be distracted into operating functions that occupy high visual and cognitive resources exactly when the vehicle must be driven with high concentration, making driving unsafe.
In a possible implementation manner, the car machine may provide an operation entry for the user on the first user interface, so that the user can operate the first function in other manners that occupy fewer of the user's visual or cognitive resources. For example, a first preset button may serve as this operation entry and guide the user to perform the related operation in combination with the voice function of the car machine or mobile phone, improving the user experience. Illustratively, referring to fig. 10, a first preset button is added on the basis of the user interface shown in (b) of fig. 8; referring to fig. 11, a first preset button is added to the user interface shown in fig. 9. The first preset button gives the user access to such high-resource-occupation functions or operations (those occupying more visual and cognitive resources), so that the user can use the disabled functions or operations. The user can operate the first preset button by clicking, dragging, or other types of actions; when the touch screen detects that the user has operated the first preset button, it indicates that the user wants to use such a function or operation, and the car machine executes step S406, i.e., guides the user to operate the function by means of voice broadcast.
It should be noted that the first preset button may be set on the first user interface by the car machine, and may also be set and displayed on the second user interface, so that even when the car machine touch screen displays all functions, other manners occupying fewer visual or cognitive resources are available for operating it. The first preset button may also be configured as a floating button for more convenient operation. The user interface on which the first preset button is displayed and its display form are not particularly limited in the embodiment of the present application.
Optionally, the first preset button may be omitted, and the first function displayed in the first preset manner may itself be combined with the voice function of the car machine or mobile phone to guide the user through the related operation. That is, the voice control mode may be entered when the first user interface detects a user operation on a first function icon. The first user interface may be one that does not display the first function, as shown in (a) of fig. 8, or one that displays the first function in the first preset manner and the second function in the second preset manner, as shown in (b) of fig. 8. In either case the user cannot directly locate the first function, or must pick the desired first function out of all the functions, which costs a certain amount of attention. Therefore, the car machine touch screen may detect a first operation of the user on the first user interface and, when it is detected, display a third user interface as shown in fig. 12, in which all first functions are displayed in the first preset manner; since only the first functions are displayed, the user can quickly locate the required function. The first operation is any one of the following operations performed by the user on the car machine touch screen: a click operation, a sliding operation, a preset pressing operation, or a preset gesture. The click operation may be a single click, a double click, or multiple consecutive clicks in a non-icon display area of the first user interface. The sliding operation is a longitudinal or oblique sliding operation, or a multi-finger sliding operation, distinguished from the sliding used by existing car machine touch screens to switch display screens. The preset pressing operation is a pressing gesture set by the user according to their usage habits, such as a single-finger or multi-finger press. The preset gesture is a gesture set by the user according to their usage habits, such as "drawing a circle". The first operation is not particularly limited in the embodiment of the present application.
By adding detection of the user's second operation on the first preset button or a first function icon to the car machine touch screen, the car machine can meet the user's need to use certain functions or operations that are not allowed in the current vehicle state, improving the user experience while ensuring safe driving.
S406, entering a voice control mode.
Specifically, after the car machine touch screen detects the second operation of the user, the car machine starts the voice interaction function. The second operation is an operation on the first preset button or on an icon of the first function, and is used for starting the first function. The voice control mode is used for voice interaction between the user and the car machine. Through the voice interaction function, the car machine prompts the user to operate the first function by means of a third operation and informs the user of the method for the third operation, where the third operation is an operation in voice interaction form.
For example, after the user operates a first preset button or a first function icon on the first user interface shown in fig. 10, the user enters a voice control mode, and the user interface shown in fig. 13 or fig. 15 is displayed. For another example, after the user operates the first function icon on the third user interface shown in fig. 12, the voice control mode is entered, and the user interface shown in fig. 13 or fig. 15 is displayed. For another example, after the user operates the first preset button on the user interface shown in fig. 11, the user enters the voice control mode, and the user interface shown in fig. 14 is displayed.
Optionally, when the first function is an entire function, it is operated through the voice control mode as shown in fig. 13. After the user selects the "home" function in the main menu 601 and operates the first preset button on the user interface shown in the display interface 602, the voice control mode is entered. The car machine can voice-broadcast "the current operation carries risk" to remind the user that this type of function can increase driving risk, and then continue with "what help do you need" to ask what operation is required. The user may answer with the desired action; for example, as shown in fig. 13, the user may answer "view the latest WeChat message". Because the car machine can convert between speech and text and broadcast the result without manual participation by the user, this operation can be realized by voice; it is thus judged that the WeChat function the user wants to use can be controlled by voice, i.e., the user may use it. The car machine therefore voice-broadcasts "WeChat opened" and then plays the latest WeChat message the user wants to check, such as the message from Mom. The user can now control the car machine directly by voice to perform the desired operation; for example, to reply to the message, the user can directly command the car machine by voice to reply "in half an hour". Finally, the car machine receives the command and voice-broadcasts the reply content, "replying 'in half an hour'", so that the user can confirm the final result. If it is wrong, the user commands the car machine by voice to input the content again, for example "the current message is wrong", "reply 'in half an hour'", so that the car machine can perform the correct operation; if it is right, the user does not have to do anything further. Alternatively, the user may enter the voice control mode directly by operating the icon of the first function on the user interface shown in fig. 10 or fig. 12. Through these operations, the user can operate the first function in other manners that occupy fewer visual or cognitive resources, improving the user experience while ensuring driving safety.
Alternatively, when the first function is a partial operation of a function, it is operated through the voice control mode as shown in fig. 14. When the user finds that a contact such as "Mom" is not in the recent call records on the call interface shown in fig. 11, the user cannot dial and therefore cannot place the call. The user can operate the first preset button in fig. 11 to bring up the voice-controllable user interface shown in fig. 14. The car machine then voice-broadcasts "what help do you need", and the user answers with the desired operation, such as "call Mom". The car machine judges whether the desired operation can be completed by voice; dialing can be completed directly by the car machine without manual control, so the car machine performs the dialing operation while voice-broadcasting "dialing Mom's home phone" to inform the user that the desired operation is being executed. The user's needs are thus met while driving safety is ensured.
Optionally, when the user performs the second operation to start the voice control mode for a first function, the car machine may evaluate that function to confirm whether it can be operated through the voice control mode. For example, the WeChat function can be operated using text-to-speech and speech-to-text conversion. By contrast, some entertainment functions require higher visual and cognitive resources and cannot be operated by voice; the user is then prompted that such functions would increase driving risk and cannot be used.
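The evaluation described here might be sketched as follows in Kotlin; which functions count as voice-operable follows the examples in the surrounding text (WeChat and dialing yes, entertainment no), and everything else is an illustrative assumption.

    // Before entering voice control, check whether the selected first
    // function can actually be completed by voice alone.
    val voiceOperable = mapOf(
        "wechat" to true,        // text/speech conversion suffices
        "dialing" to true,       // the car machine can dial on its own
        "entertainment" to false // needs manual, visual participation
    )

    fun tryVoiceControl(function: String): String =
        if (voiceOperable[function] == true)
            "Entering voice control mode for '$function'."
        else
            "The $function function is high-risk in the current scene and cannot be operated."

    fun main() {
        println(tryVoiceControl("wechat"))
        println(tryVoiceControl("entertainment"))
    }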
Illustratively, fig. 15 shows a prompt indicating that the first function cannot be operated through the voice control mode. Some functions cannot achieve their purpose through simple voice control and require further thought from the user, so they are completely prohibited. For example, after the user operates the first preset button or a first function icon on the user interface shown in fig. 10, the car machine may voice-broadcast "the current operation carries risk" to remind the user that operating the first function now may increase driving risk, and then continue with "what help do you need". The user may answer with the desired operation; for the entertainment function shown in fig. 10, the user may voice-command the car machine to "turn on the entertainment function". The car machine must then judge whether the commanded function can be operated by voice control. Because the entertainment function requires the user's manual participation and cannot be operated by voice alone, the car machine judges that it cannot be used in the current state and voice-broadcasts "the entertainment function is high-risk in the current scene and cannot be operated", preventing the user from using the function and ensuring driving safety.
It should be noted that, after the user completes the current operation in the voice control mode, the car machine may display the user interface corresponding to the current scene of the vehicle. Alternatively, after the user completes the current operation in the voice control mode, the car machine does not immediately switch the user interface according to the current scene, but keeps displaying the current user interface for a preset time period, so that the user can continue operating the current function by voice without operating the first preset button again. For example, referring to fig. 13, 14, and 15, the current user interface includes a "microphone" icon, and the user can continue using the current function by clicking this icon within the preset time period. In the example of fig. 13, if the user confirms that the replied message is correct but then wants to send Mom another reply, the user may click the "microphone" icon to instruct the car machine to reply with the corresponding content again, or to command the car machine to "view the next new message" or "reply to colleague A 'I am not working tomorrow'", and so on. Adding this function to the user interface in the voice control mode reduces repeated operations, improves efficiency, and further allows the user to focus their attention on driving.
It is understood that the "microphone" icon is merely an exemplary representation, and other icons or other forms may enable the car machine to provide the above function; this is not specifically limited in the embodiment of the present application. Of course, no icon need be set for continued use of the voice control mode: the car machine may be configured to remain in the voice control mode for a preset time period after the user's current voice operation is completed, so that the user can operate the functions on the car machine touch screen directly by voice. The embodiments of the present application are not particularly limited.
Optionally, in step S404, when the car machine determines that the vehicle is in a preset scene, referring to fig. 5, step S405 need not be executed immediately; instead, step S407 is executed first to determine whether the current vehicle speed is greater than a second preset speed, where the second preset speed is the minimum speed limit corresponding to the preset scene. That is, by executing step S407, when the user's vehicle is in a preset scene but the vehicle speed is low, all functions on the car machine touch screen remain usable, providing a better experience for the user.
S407, the car machine determines whether the current vehicle speed is greater than the second preset speed. If the car machine determines that the current vehicle speed is greater than the second preset speed, step S405 is executed; if it is less than or equal to the second preset speed, step S403 is executed.
Specifically, the second preset speed is the minimum among the speed limits corresponding to the preset scenes the vehicle is in. Different speed limits (second preset speeds) can be set for different preset scenes, and all the second preset speeds together form a second preset speed set. When the road conditions are complex, the user may be in several preset scenes at the same time. For example, when the vehicle runs through a high-risk road section at night, the car machine determines that the vehicle is in two preset scenes, "night driving" and "high-risk road section"; the two second preset speeds corresponding to these scenes are obtained from the second preset speed set, and the smaller one is taken as the judgment basis for which user interface the car machine displays. When the user drives the vehicle in a single preset scene, the second preset speed corresponding to that scene is obtained from the set and used as the judgment basis. When the current vehicle speed is higher than the second preset speed so obtained, the vehicle speed is considered high, the user needs to pay high attention to driving, and the determination in step S407 is yes. Of course, the speed limits corresponding to the preset scenes may also be unified into a single second preset speed, and when the current vehicle speed is higher than it, the determination is yes. The second preset speed set and the second preset speed are preset by the manufacturer according to vehicle performance or preset by the car machine according to the user's operating habits.
Illustratively, different speed limits (second preset speeds) are set for the different preset scenes to form the second preset speed set. For example: for driving on a road section with dense pedestrian flow, the second preset speed is set to 30 km/h; for night driving, 40 km/h; for a high-risk road section, 15 km/h; for severe weather, 20 km/h; and for a road section with speed limit A, 50% of A, e.g., 30 km/h when the current speed limit is 60 km/h. This yields the second preset speed set {30, 40, 15, 20, 50% of A} km/h. The car machine then looks up the second preset speeds corresponding to the preset scenes the user is currently in and obtains the minimum among them. If the current vehicle speed is greater than this minimum second preset speed, the vehicle is currently in a scene requiring high attention and the user must concentrate on driving, so step S405 is executed; that is, the car machine displays the first user interface in which the first function is hidden or cannot be operated.
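A Kotlin sketch of this minimum-selection rule, using the example values above; the scene identifiers and function names are assumptions for illustration.

    // Second preset speeds per preset scene, mirroring the example values
    // above; the speed-limited-section entry takes half of the posted limit A.
    fun secondPresetSpeedsKmh(activeScenes: Set<String>, postedLimitKmh: Double?): List<Double> =
        activeScenes.mapNotNull { scene ->
            when (scene) {
                "dense_pedestrian" -> 30.0
                "night" -> 40.0
                "high_risk" -> 15.0
                "severe_weather" -> 20.0
                "speed_limited" -> postedLimitKmh?.let { it * 0.5 }
                else -> null
            }
        }

    // The judgment basis is the minimum second preset speed among all scenes
    // the vehicle is currently in.
    fun mustShowFirstUi(currentSpeedKmh: Double, activeScenes: Set<String>, postedLimitKmh: Double? = null): Boolean {
        val minLimit = secondPresetSpeedsKmh(activeScenes, postedLimitKmh).minOrNull() ?: return false
        return currentSpeedKmh > minLimit
    }

    fun main() {
        // Night driving through a high-risk section: min(40, 15) = 15 km/h.
        println(mustShowFirstUi(currentSpeedKmh = 25.0, activeScenes = setOf("night", "high_risk"))) // true
    }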
Alternatively, among the preset scenes, some are difficult to distinguish by speed limit, or the second preset speeds corresponding to different preset scenes are similar. Accordingly, a uniform second preset speed may be established for all preset scenes, for example 20 km/h. Whether the current vehicle speed of the vehicle driven by the user is greater than this second preset speed is then judged, which determines whether the user currently needs to pay high attention, i.e., which user interface the car machine displays.
In another possible implementation, even in the same preset scene, different second preset speeds may be appropriate under different conditions. The second preset speed can therefore be established in combination with the different conditions of different users or the different performance conditions of the vehicle.
Optionally, the user's driving proficiency can influence their operation and use of the car machine. Therefore, the second preset speed of the corresponding scene can be set according to factors that affect driving proficiency, such as the user's driving experience. For example, for a novice driver, the vehicle is in a high-speed driving state once the vehicle speed exceeds 55 km/h, and the driver cannot spare much attention to operate the car machine; for an experienced driver, that point is not reached until the vehicle speed exceeds 65 km/h. In other words, the user's driving proficiency can be determined from factors such as driving experience, and different second preset speeds set for different scenes to form the second preset speed set. The second preset speed set may be selected by the user, or first configured by the manufacturer and then revised at annual inspection. Of course, the car machine may also configure the corresponding mode for the user according to the user's operation, where the different configuration modes are different second preset speed sets configured for different driving proficiency levels. For example, users may be classified into three levels according to driving proficiency: class A, class B, and class C. Class A users are considered the most skilled drivers, and the second preset speeds corresponding to their preset scenes can be set higher. Class B users have average proficiency, so their second preset speeds can be set to a medium level; the class B speed set can serve as the default second preset speed set, so vehicles whose users' driving proficiency cannot be determined are all configured with the class B set. Class C users use a second preset speed set established for novice drivers.
Optionally, the user's health condition or age may also have a certain effect on driving safety. Therefore, the second preset speed of the corresponding scene may be set according to factors, such as age, that affect the user's driving state. For example, if the user has a heart condition and cannot respond well to emergencies, the user's second preset speed set needs to be set lower to ensure driving safety; similarly, an older user with a slower reaction speed needs a lower second preset speed set. The camera inside the vehicle, or of the mobile phone, can also capture the user's facial expression, and image analysis can then determine whether the user's current state is abnormal; if the user's state is poor, the car machine can apply a lower second preset speed set, so that the user can cope with the situations in the preset scenes and driving safety is ensured.
Optionally, vehicles of different performance also differ in their capacity to handle emergencies. Therefore, the second preset speed sets corresponding to the different preset scenes can be set according to vehicle performance. For example, a vehicle with poor performance may have weak braking and, in an emergency, may not brake in time to keep the user safe. A lower second preset speed set therefore needs to be established for it, so that the user can touch and operate all functions of the car machine touch screen only at a lower vehicle speed, ensuring that enough of the user's attention remains for emergencies.
Of course, besides the above-mentioned cases, the setting of the second preset speed in different preset scenes may be influenced by other conditions of the user and the vehicle, and the embodiment of the present application is not particularly limited.
In one possible implementation, a first time threshold may also be set. For example, while the first user interface is displayed, the scene the user is in at the next moment may satisfy the condition for the car machine touch screen to display the interface including all functions; the interface then need not be switched immediately, but only after the current state has persisted longer than the first time threshold. Because road driving conditions are complex, the vehicle speed may drop temporarily due to an emergency, in which case the user needs more attention to observe and analyze the road conditions and cannot spare visual and cognitive resources for interface operation; setting the first time threshold therefore further ensures the user's driving safety.
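The first-time-threshold rule might be sketched as follows in Kotlin; the threshold value and names are assumptions, and the sketch only illustrates that switching back is deferred until the relaxed condition has held continuously past the threshold.

    // Switching back to the all-functions interface happens only after the
    // relaxed condition has held continuously longer than the threshold.
    class UiSwitchDebouncer(private val firstTimeThresholdMs: Long) {
        private var relaxedSinceMs: Long? = null

        // Called periodically; returns true when it is safe to switch back to
        // the second (all-functions) user interface.
        fun maySwitchToSecondUi(conditionRelaxed: Boolean, nowMs: Long): Boolean {
            if (!conditionRelaxed) { relaxedSinceMs = null; return false }
            val since = relaxedSinceMs ?: nowMs.also { relaxedSinceMs = it }
            return nowMs - since >= firstTimeThresholdMs
        }
    }

    fun main() {
        val debouncer = UiSwitchDebouncer(firstTimeThresholdMs = 10_000)
        println(debouncer.maySwitchToSecondUi(true, nowMs = 0))      // false, condition just relaxed
        println(debouncer.maySwitchToSecondUi(true, nowMs = 12_000)) // true, held for more than 10 s
    }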
In a possible implementation manner, when the user does not complete the operation on the touch screen displaying the second user interface and the scene where the user is located is switched to the scene where the vehicle-mounted device is required to display the first user interface, the current operation of the user needs to be immediately interrupted to switch the user interfaces, so as to ensure the driving safety of the user. Or, the user interface switching operation can be performed after the user completes the current operation, but the user is prompted to have a driving risk in the current scene through voice broadcasting, so that the driving safety of the user is guaranteed while the user requirements are met.
Therefore, the method for displaying a user interface provided above can set the second preset speed on the basis of the preset scenes and the first preset speed, and judge which user interface to display according to the scene the vehicle is in and the real-time driving speed; when the user needs to pay high attention, the car machine displays the first user interface, ensuring the user's driving safety. In addition, by detecting the user's second operation, the user can operate the first function of the first user interface in other manners that do not occupy the user's visual or cognitive resources, improving the user experience. This avoids the situation in the prior art where the user interface always displays all functions, so that the user may be distracted into operating functions that occupy high visual and cognitive resources exactly when the vehicle must be driven with high concentration, making driving unsafe.
The embodiment of the present application further provides a vehicle-mounted terminal, which may include: the device comprises a starting unit, an acquisition unit, a processing unit, a display unit and the like. The units may perform the steps of the above embodiments to implement a method of displaying a user interface. For example, the starting unit is configured to support the vehicle-mounted terminal to execute starting of the vehicle-mounted device in S401 in fig. 4. And the acquisition unit is used for supporting the vehicle-mounted terminal to perform acquisition of vehicle speed, vehicle scene environment information, user voice information and the like in S402, S404 and S406 in FIG. 4. A processing unit for supporting the in-vehicle terminal to execute S402, S404, S406 in FIG. 4, S407 in FIG. 5, and the like. And a display unit for supporting the in-vehicle terminal to execute the display of the user interface of S403, S405 in fig. 4, and the like. And/or other processes for the schemes described herein.
The embodiment of the application also provides a vehicle-mounted terminal, which comprises one or more processors; a memory; and the touch screen is used for detecting touch operation and displaying an interface. The memory stores instructions, and when the instructions are executed by the one or more processors, the in-vehicle terminal is enabled to execute the steps in the above embodiments to implement the method for displaying the user interface in the above embodiments.
For example, when the vehicle-mounted terminal is the device shown in fig. 3, the processor in the vehicle-mounted terminal may be the processor 210 shown in fig. 3, the memory in the vehicle-mounted terminal may be the memory 220 shown in fig. 3, and the touch screen in the vehicle-mounted terminal may be a combination of the display screen 260 and the touch sensor 290E shown in fig. 3.
Embodiments of the present application also provide a chip system, as shown in fig. 16, which includes at least one processor 1601 and at least one interface circuit 1602. The processor 1601 and the interface circuit 1602 may be interconnected by a line. For example, the interface circuit 1602 may be used to receive signals from other devices (e.g., a memory of the electronic apparatus 200). Also for example, the interface circuit 1602 may be used to send signals to other devices, such as the processor 1601. Illustratively, the interface circuit 1602 may read instructions stored in memory and send the instructions to the processor 1601. When executed by the processor 1601, the instructions may cause the electronic device to perform the steps performed by the electronic device 200 (e.g., a car machine) in the above embodiments. Of course, the chip system may further include other discrete devices, which is not specifically limited in this embodiment of the present application.
An embodiment of the present application further provides a computer storage medium storing computer instructions that, when run on the vehicle-mounted terminal, cause the vehicle-mounted terminal to perform the related method steps to implement the method for displaying a user interface in the foregoing embodiments.
An embodiment of the present application further provides a computer program product that, when run on a computer, causes the computer to perform the related steps above, so as to implement the method for displaying a user interface in the foregoing embodiments.
In addition, an embodiment of the present application further provides an apparatus, which may specifically be a component or a module, and may include a processor and a memory that are connected to each other. The memory is configured to store computer-executable instructions; when the apparatus runs, the processor may execute the instructions stored in the memory, so that the apparatus performs the method for displaying a user interface in the foregoing method embodiments.
The vehicle-mounted terminal, the chip system, the computer storage medium, the computer program product, and the apparatus provided in the embodiments of the present application are all configured to perform the corresponding methods provided above. Therefore, for the beneficial effects that they can achieve, refer to the beneficial effects of the corresponding methods provided above; details are not described herein again.
Through the foregoing description of the embodiments, a person skilled in the art will clearly understand that the division into the foregoing functional modules is merely an example used for convenience and brevity of description. In practical applications, the foregoing functions may be allocated to different functional modules as required; that is, the internal structure of the apparatus may be divided into different functional modules to complete all or some of the functions described above. For the specific working processes of the system, apparatus, and units described above, refer to the corresponding processes in the foregoing method embodiments; details are not described herein again.
It should be understood that the methods disclosed in the embodiments provided in the present application may be implemented in other manners. For example, the described vehicle-mounted terminal embodiments are merely illustrative. The division into modules or units is merely a logical function division, and there may be other division manners in actual implementation; for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections between modules or units through some interfaces, and may be in electrical, mechanical, or other forms.
The units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units; they may be located in one place or distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or may be implemented in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the present application essentially, or the part contributing to the prior art, or all or some of the technical solutions, may be embodied in the form of a software product. The software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) or a processor to perform all or some of the steps of the methods in the embodiments of the present application. The foregoing storage medium includes any medium that can store program code, such as a flash memory, a removable hard disk, a read-only memory, a random access memory, a magnetic disk, or an optical disc.
The foregoing description is merely an embodiment of the present application, but the protection scope of the present application is not limited thereto. Any change or substitution within the technical scope disclosed in the present application shall fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (15)

1. A method for displaying a user interface, applied to a vehicle-mounted terminal, wherein the method comprises:
if the vehicle is in a driving state, the vehicle-mounted terminal determines whether the vehicle meets a first preset condition, wherein the first preset condition comprises that the vehicle is in a preset scene or that a current speed is greater than a first preset speed; and
if the vehicle meets the first preset condition, the vehicle-mounted terminal displays a first user interface, wherein a first function is not displayed in the first user interface, or the first function is displayed in the first user interface in a first preset manner, and the first preset manner is used for indicating that the first function cannot be operated by a user.
2. The method for displaying a user interface according to claim 1, wherein a second function is displayed in the first user interface in a second preset manner, and the second preset manner is used for indicating that the user is allowed to operate the second function.
3. The method for displaying a user interface according to claim 1, wherein after the vehicle-mounted terminal displays the first user interface, the method further comprises:
if the vehicle-mounted terminal detects a first operation performed by the user on the first user interface, the vehicle-mounted terminal displays a third user interface, wherein the first function is displayed in the third user interface only in the first preset manner, and the first operation is any one of the following operations performed by the user on a touch screen of the vehicle-mounted terminal: a tap operation, a slide operation, a preset press operation, or a preset gesture operation.
4. The method for displaying a user interface according to any one of claims 1-3, wherein
the preset scene comprises any one or more of the following: the vehicle is traveling on a road section with dense pedestrian traffic, the vehicle is traveling on a high-risk road section, the vehicle is traveling at night, the vehicle is traveling in severe weather conditions, or the vehicle is traveling on a speed-limited road section.
5. The method for displaying a user interface according to any one of claims 1-4, further comprising:
if the vehicle-mounted terminal detects a second operation of the user, the vehicle-mounted terminal starts a voice control mode, wherein the second operation is an operation on a first preset button or on an icon of the first function and is used for starting the first function, and the voice control mode is used for voice interaction between the user and the vehicle-mounted terminal.
6. The method for displaying a user interface according to claim 5, wherein starting the voice control mode by the vehicle-mounted terminal comprises:
the vehicle-mounted terminal starts a voice interaction function;
the vehicle-mounted terminal prompts, through the voice interaction function, the user to operate the first function through a third operation, and informs the user of an operation method of the third operation, wherein the third operation is an operation in the form of voice interaction.
7. The method for displaying a user interface according to any one of claims 1-6, wherein the first preset condition further comprises:
the vehicle is in the preset scene, and the current speed is greater than a second preset speed; and the second preset speed is the minimum speed limit corresponding to the preset scene.
8. The method for displaying a user interface according to any one of claims 1-7, further comprising:
if it is determined that the vehicle does not meet the first preset condition, the vehicle-mounted terminal displays a second user interface, wherein the first function and a second function are displayed in the second user interface in a second preset manner, and the second preset manner is used for indicating that the user is allowed to operate the first function and the second function.
9. The method for displaying a user interface according to any one of claims 1-8, wherein determining whether the vehicle is in the preset scene comprises:
the vehicle-mounted terminal determines whether the vehicle is traveling on a road section with dense pedestrian traffic according to at least one of the following data: a vehicle driving state, an image of the surroundings of the vehicle, or a vehicle driving route;
and/or the vehicle-mounted terminal determines whether the vehicle is traveling on a high-risk road section according to at least one of the following data: a vehicle driving route, a vehicle driving state, or an image of the surroundings of the vehicle;
and/or the vehicle-mounted terminal determines whether the vehicle is traveling at night according to at least one of the following data: an ambient light condition of the vehicle, an image of the surroundings of the vehicle, or the current time;
and/or the vehicle-mounted terminal determines whether the vehicle is traveling in severe weather conditions according to at least one of the following data: an ambient light condition of the vehicle, an image of the surroundings of the vehicle, or data from weather software;
and/or the vehicle-mounted terminal determines whether the vehicle is traveling on a speed-limited road section according to at least one of the following data: an image of the surroundings of the vehicle or current road information.
10. The method for displaying a user interface according to any one of claims 1-9, wherein
the first function comprises any one or more of the following: an entertainment function, an information function, a dialing function in a call function, a contact search function in the call function, an address search function in a navigation function, a song search function in a music function, or a weather query function in a weather function.
11. The method for displaying a user interface according to any one of claims 2-10, wherein
the second function comprises any one or more of the following: a recent contacts function in the call function, a recommended address function in the navigation function, a recent playlist function in the music function, or a local real-time weather function in the weather function.
12. A vehicle-mounted terminal, comprising:
one or more processors;
a memory having instructions stored therein;
a touch screen configured to detect touch operations and display a user interface;
wherein the instructions, when executed by the one or more processors, cause the vehicle-mounted terminal to perform the method for displaying a user interface according to any one of claims 1-11.
13. A computer storage medium, comprising computer instructions that, when run on a vehicle-mounted terminal, cause the vehicle-mounted terminal to perform the method for displaying a user interface according to any one of claims 1-11.
14. A computer program product, wherein when the computer program product runs on a computer, the computer is caused to perform the method for displaying a user interface according to any one of claims 1-11.
15. A chip system, comprising at least one processor and at least one interface circuit, wherein the at least one interface circuit is configured to perform a transceiving function and send instructions to the at least one processor, and when the at least one processor executes the instructions, the at least one processor performs the method for displaying a user interface according to any one of claims 1-11.
CN201910808405.0A 2019-08-29 2019-08-29 Method for displaying user interface and vehicle-mounted terminal Pending CN110716776A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910808405.0A CN110716776A (en) 2019-08-29 2019-08-29 Method for displaying user interface and vehicle-mounted terminal
PCT/CN2020/112285 WO2021037251A1 (en) 2019-08-29 2020-08-28 Method for displaying user interface, and vehicle-mounted terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910808405.0A CN110716776A (en) 2019-08-29 2019-08-29 Method for displaying user interface and vehicle-mounted terminal

Publications (1)

Publication Number Publication Date
CN110716776A (en) 2020-01-21

Family

ID=69209502

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910808405.0A Pending CN110716776A (en) 2019-08-29 2019-08-29 Method for displaying user interface and vehicle-mounted terminal

Country Status (2)

Country Link
CN (1) CN110716776A (en)
WO (1) WO2021037251A1 (en)

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5194651B2 (en) * 2007-08-31 2013-05-08 パナソニック株式会社 In-vehicle display device
CN104750379B (en) * 2013-12-30 2018-08-21 上海博泰悦臻网络技术服务有限公司 The method for displaying user interface and device of onboard system
CN105321515A (en) * 2014-06-17 2016-02-10 中兴通讯股份有限公司 Vehicle-borne application control method of mobile terminal, device and terminal
CN106445296B (en) * 2016-09-27 2021-09-28 奇瑞汽车股份有限公司 Method and device for displaying vehicle-mounted application program icons
US10490188B2 (en) * 2017-09-12 2019-11-26 Toyota Motor Engineering & Manufacturing North America, Inc. System and method for language selection
CN108600519A (en) * 2018-03-31 2018-09-28 广东欧珀移动通信有限公司 Incoming-call control method and Related product
CN110716776A (en) * 2019-08-29 2020-01-21 华为终端有限公司 Method for displaying user interface and vehicle-mounted terminal

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102770832A (en) * 2010-02-26 2012-11-07 诺基亚公司 Method and apparatus for providing cooperative enablement of user input options
US20150099495A1 (en) * 2012-10-16 2015-04-09 Excelfore Corporation System and Method for Monitoring Apps in a Vehicle or in a Smartphone to Reduce Driver Distraction
CN105450847A (en) * 2014-09-18 2016-03-30 福特全球技术公司 Method and apparatus for selective mobile application lockout
CN107380096A (en) * 2016-05-17 2017-11-24 谷歌公司 Application when operating vehicle performs

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021037251A1 (en) * 2019-08-29 2021-03-04 华为技术有限公司 Method for displaying user interface, and vehicle-mounted terminal
US11691625B2 (en) 2020-04-03 2023-07-04 Citic Dicastal Co., Ltd. Driving scene determining method and apparatus, computer, storage medium, and system
CN111369709A (en) * 2020-04-03 2020-07-03 中信戴卡股份有限公司 Driving scene determination method, device, computer, storage medium and system
CN111746401A (en) * 2020-06-29 2020-10-09 广州小鹏车联网科技有限公司 Interaction method based on three-dimensional parking and vehicle
CN111899545A (en) * 2020-07-29 2020-11-06 Tcl通讯(宁波)有限公司 Driving reminding method and device, storage medium and mobile terminal
CN111899545B (en) * 2020-07-29 2021-11-16 Tcl通讯(宁波)有限公司 Driving reminding method and device, storage medium and mobile terminal
CN112078520A (en) * 2020-09-11 2020-12-15 广州小鹏汽车科技有限公司 Vehicle control method and device
WO2022052344A1 (en) * 2020-09-11 2022-03-17 广州橙行智动汽车科技有限公司 Vehicle control method and apparatus
CN112230764A (en) * 2020-09-27 2021-01-15 中国人民解放军空军特色医学中心 Function switching method and device for tactile perception carrier and electronic equipment
CN112389198A (en) * 2020-11-17 2021-02-23 广州小鹏汽车科技有限公司 Display control method, display control device, vehicle, and storage medium
WO2022121586A1 (en) * 2020-12-11 2022-06-16 广州橙行智动汽车科技有限公司 Vehicle control card interaction method and apparatus adapted to a vehicle component
WO2022134106A1 (en) * 2020-12-25 2022-06-30 华为技术有限公司 Central control screen display method and related device
CN114446083A (en) * 2021-12-31 2022-05-06 珠海华发新科技投资控股有限公司 Intelligent community service system
CN114368288A (en) * 2022-01-05 2022-04-19 一汽解放汽车有限公司 Display control method and device of vehicle-mounted terminal, computer equipment and storage medium
CN114368288B (en) * 2022-01-05 2023-09-12 一汽解放汽车有限公司 Display control method and device of vehicle-mounted terminal, computer equipment and storage medium
CN114461330A (en) * 2022-02-11 2022-05-10 腾讯科技(深圳)有限公司 Display control method and related device of vehicle-mounted terminal
WO2023179435A1 (en) * 2022-03-25 2023-09-28 华为技术有限公司 Adaptive configuration method for vehicle-mounted application, and vehicle-mounted terminal
WO2023207116A1 (en) * 2022-04-27 2023-11-02 华为技术有限公司 App display method and device, and vehicle-mounted terminal
CN115016361A (en) * 2022-07-01 2022-09-06 中国第一汽车股份有限公司 Vehicle-mounted unmanned aerial vehicle control method and device, electronic equipment and medium
WO2024027550A1 (en) * 2022-07-30 2024-02-08 华为技术有限公司 Application control method for vehicle central control device, and related apparatus
CN115914006A (en) * 2022-11-01 2023-04-04 长城汽车股份有限公司 Method and device for processing network information of vehicle, vehicle and electronic device

Also Published As

Publication number Publication date
WO2021037251A1 (en) 2021-03-04

Similar Documents

Publication Publication Date Title
CN110716776A (en) Method for displaying user interface and vehicle-mounted terminal
CN113163470B (en) Method for identifying specific position on specific route and electronic equipment
CN113905179B (en) Method for switching cameras by terminal and terminal
CN110910872B (en) Voice interaction method and device
CN110138959B (en) Method for displaying prompt of human-computer interaction instruction and electronic equipment
WO2020244622A1 (en) Notification prompt method, terminal and system
CN111724775B (en) Voice interaction method and electronic equipment
CN112861638A (en) Screen projection method and device
CN112397062A (en) Voice interaction method, device, terminal and storage medium
WO2021000817A1 (en) Ambient sound processing method and related device
CN111368765A (en) Vehicle position determining method and device, electronic equipment and vehicle-mounted equipment
CN113434643A (en) Information recommendation method and related equipment
CN114466107A (en) Sound effect control method and device, electronic equipment and computer readable storage medium
CN112923943A (en) Auxiliary navigation method and electronic equipment
WO2024001940A1 (en) Vehicle searching method and apparatus, and electronic device
CN115641867B (en) Voice processing method and terminal equipment
CN117864147A (en) Driving state detection method and related equipment
CN113890929B (en) Method and device for switching audio output channel and electronic equipment
CN116032942A (en) Method, device, equipment and storage medium for synchronizing cross-equipment navigation tasks
CN114765768A (en) Network selection method and equipment
CN115695636B (en) Intelligent voice interaction method and electronic equipment
WO2023098467A1 (en) Voice parsing method, electronic device, readable storage medium, and chip system
WO2023241482A1 (en) Man-machine dialogue method, device and system
WO2023104075A1 (en) Navigation information sharing method, electronic device, and system
CN116954770A (en) Display method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200121