WO2021037251A1 - Method for displaying a user interface and vehicle-mounted terminal
- Publication number
- WO2021037251A1 (PCT/CN2020/112285)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- vehicle
- function
- user
- preset
- user interface
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/451—Execution arrangements for user interfaces
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R16/00—Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for
- B60R16/02—Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for electric constitutive elements
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
Description
- This application relates to the field of terminal technology, and in particular to a method for displaying a user interface and a vehicle-mounted terminal.
- Touch-sensitive in-vehicle systems have gradually become standard equipment for vehicles from various manufacturers.
- Touch-based in-vehicle systems can provide richer functions, such as calling, navigation, video, and music.
- In the in-car scenario, driving is the primary task; all other tasks are secondary.
- The in-vehicle system should therefore be used only on the premise of driving safety.
- Because the touch screen of an in-vehicle system cannot provide tactile feedback, it occupies more of the user's visual resources during driving than physical controls do, which affects driving safety.
- The present application provides a method for displaying a user interface and a vehicle-mounted terminal, which can switch between different user interfaces according to the scene the vehicle is in and the current vehicle speed, so as to ensure the user's safety while driving.
- the present application provides a method for displaying a user interface.
- The method may include: if the vehicle is in a driving state, the vehicle-mounted terminal determines whether the vehicle meets a first preset condition, where the first preset condition includes the vehicle being in a preset scene or the current vehicle speed being greater than a first preset speed. If it is determined that the vehicle meets the first preset condition, the vehicle-mounted terminal displays a first user interface that either does not display the first function, or displays the first function in a first preset mode, where the first preset mode is used to indicate that the first function cannot be operated by the user.
- the first function is a function that occupies more visual resources or cognitive resources of the user.
- The first function includes any one or more of the following: an entertainment function, an information function, the dialing function of the call function, the contact search function of the call function, the address search function of the navigation function, the song search function of the music function, and the weather query function of the weather function.
- The vehicle-mounted terminal can determine, during driving, whether the vehicle is currently in the preset scene or whether the current vehicle speed is greater than the first preset speed, and thereby determine whether the current vehicle state requires the user to be highly focused on driving the vehicle.
- If so, the touch screen of the vehicle-mounted terminal will not display, or will display but not allow the user to operate, those functions that occupy more of the user's visual or cognitive resources, so that the user can concentrate on driving the vehicle, ensuring driving safety.
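The first-condition logic above can be sketched as follows; this is a minimal illustration with hypothetical names, and the patent does not prescribe any particular implementation:

```python
def select_interface(in_preset_scene: bool, speed_kmh: float,
                     first_preset_speed: float) -> dict:
    """Decide which user interface the vehicle-mounted terminal shows
    while the vehicle is in a driving state."""
    # First preset condition: in a preset scene, OR faster than the
    # first preset speed.
    if in_preset_scene or speed_kmh > first_preset_speed:
        # First user interface: the attention-heavy "first function"
        # is hidden, or shown in the first preset mode (not operable).
        return {"interface": "first", "first_function": "locked"}
    # Otherwise all functions are displayed and operable.
    return {"interface": "second", "first_function": "operable"}
```

For example, at 80 km/h with a first preset speed of 60 km/h, this sketch would select the first user interface with the first function locked.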
- the second function is displayed in a second preset manner in the first user interface, and the second preset manner is used to indicate that the user is allowed to operate the second function.
- the second function is a function that occupies less visual resources or cognitive resources of the user.
- the second function includes any one or more of the following: the recent contact function of the call function, the recommended address function of the navigation function, the recent playlist function in the music function, and the local real-time weather function in the weather function.
- The method further includes: if the vehicle-mounted terminal detects the user's first operation on the first user interface, the vehicle-mounted terminal displays a third user interface, in which only the first function is displayed in the first preset manner.
- the first operation is any one of the following operations on the touch screen of the vehicle terminal by the user: a tap operation, a sliding operation, a preset pressing operation, and a preset gesture operation.
- Although the current interface hides some functions, or displays all functions but leaves some of them inoperable, a few simple operations can make the vehicle-mounted terminal display, in one place, all the functions that occupy the user's visual and cognitive resources.
- This interface lets the user quickly locate functions they want to operate but that are hidden or not allowed, reducing the attention the user would otherwise spend searching for them.
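The first-operation behaviour above might be dispatched like this; the operation names are illustrative, not from the patent:

```python
# Any of the four preset "first operations" on the touch screen.
FIRST_OPERATIONS = {"tap", "slide", "preset_press", "preset_gesture"}

def handle_operation(operation: str, current_interface: str) -> str:
    """On the first user interface, a first operation switches to the
    third user interface, which gathers every hidden or locked first
    function in one place."""
    if current_interface == "first" and operation in FIRST_OPERATIONS:
        return "third"
    # Any other operation leaves the displayed interface unchanged.
    return current_interface
```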
- The preset scene includes any one or more of the following: the vehicle is driving on a densely populated road, on a high-risk road, at night, under severe weather conditions, or on a speed-limited road section.
- Judging whether the vehicle is in a preset scene includes:
  - the vehicle-mounted terminal judges whether the vehicle is driving on a crowded road section based on at least one of the following data: the vehicle driving state, images of the surrounding environment of the vehicle, and the vehicle driving route; and/or
  - judges whether the vehicle is driving on a high-risk road section based on at least one of the following data: the vehicle driving route, the vehicle driving state, and images of the surrounding environment of the vehicle; and/or
  - judges whether the vehicle is driving at night based on at least one of the following data: the ambient light condition of the vehicle, images of the surrounding environment of the vehicle, and the real-time time; and/or
  - judges whether the vehicle is driving under severe weather conditions based on at least one of the following data: the ambient light condition of the vehicle, images of the surrounding environment, and weather software data; and/or
  - judges whether the vehicle is driving on a speed-limited road section based on at least one of the following data: the vehicle driving state and images of the surrounding environment of the vehicle.
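The per-scene data sources enumerated above can be summarized in a table, and the overall scene check reduces to "any detector fires". This is a sketch under the assumption that each detector is a boolean predicate; none of these names appear in the patent:

```python
# Scene name -> data sources the terminal may consult (per the text).
SCENE_DATA_SOURCES = {
    "crowded_road":   {"driving_state", "surround_images", "route"},
    "high_risk_road": {"route", "driving_state", "surround_images"},
    "night":          {"ambient_light", "surround_images", "real_time"},
    "severe_weather": {"ambient_light", "surround_images", "weather_app"},
    "speed_limited":  {"driving_state", "surround_images"},
}

def in_preset_scene(detectors: dict) -> bool:
    """detectors maps a scene name to a zero-argument callable that
    returns True when that scene is detected. The vehicle is 'in a
    preset scene' if any one detector fires."""
    return any(detect() for detect in detectors.values())
```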
- If the vehicle-mounted terminal detects a second operation by the user, the vehicle-mounted terminal starts a voice control mode, where the second operation is an operation on a first preset button or on the icon of the first function and is used to start the first function; the voice control mode is used for voice interaction between the user and the vehicle-mounted terminal.
- In this way, the user can use functions that occupy more visual and cognitive resources through the voice control mode, and voice control does not occupy too much of the user's attention. The user's needs can therefore be met, and the user experience improved, on the premise of driving safety.
- The vehicle-mounted terminal activating the voice control mode specifically includes: the vehicle-mounted terminal activates the voice interaction function, prompts the user that the first function can be operated through a third operation, and informs the user how to perform the third operation, where the third operation is an operation in the form of voice interaction.
- The first preset condition may further include: the vehicle is in a preset scene and the current vehicle speed is greater than a second preset speed, where the second preset speed is the minimum speed limit corresponding to the preset scene.
- Below that speed, the first function does not need to be hidden or disabled, which further improves the user experience while still ensuring the user's driving safety.
- the vehicle-mounted terminal displays a second user interface, and the second user interface displays the first function and the second function in a second preset manner.
- the second preset mode is used to indicate that the user is allowed to operate the first function and the second function.
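Combining this refinement with the basic condition gives the following sketch; the names and speed units are illustrative assumptions, not from the patent:

```python
def choose_interface(in_scene: bool, speed_kmh: float,
                     first_preset_speed: float,
                     scene_min_limit: float) -> str:
    """Refined first preset condition: inside a preset scene the
    restriction applies only above the scene's minimum speed limit
    (the 'second preset speed'); outside any preset scene, only
    above the first preset speed."""
    restricted = ((in_scene and speed_kmh > scene_min_limit)
                  or speed_kmh > first_preset_speed)
    # Restricted -> first user interface (first function hidden or
    # locked); otherwise -> second user interface, all operable.
    return "first" if restricted else "second"
```

Under this sketch, a vehicle crawling at 30 km/h through a preset scene with a 60 km/h minimum limit would still see the unrestricted second user interface.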
- In another aspect, an embodiment of the present application provides a vehicle-mounted terminal, and the electronic device may be a device that implements the above-mentioned method of the first aspect.
- The electronic device may include: one or more processors; a memory in which instructions are stored; and a touch screen used to detect touch operations and display an interface. When the instructions are executed by the one or more processors, the electronic device performs the method for displaying a user interface of the first aspect.
- the present application provides a vehicle-mounted terminal, which has a function of realizing the method for displaying a user interface of any one of the above-mentioned first aspects.
- This function can be realized by hardware, or by hardware executing corresponding software.
- the hardware or software includes one or more modules corresponding to the above-mentioned functions.
- An embodiment of the present application provides a computer storage medium including computer instructions.
- When the computer instructions run on a vehicle-mounted terminal, the vehicle-mounted terminal is caused to execute the method described in the first aspect and any one of its possible implementations.
- the embodiments of the present application provide a computer program product, which when the computer program product runs on a computer, causes the computer to execute the method described in the first aspect and any of the possible implementation manners.
- In a sixth aspect, a circuit system includes a processing circuit configured to execute the method for displaying a user interface of any one of the above-mentioned first aspects.
- an embodiment of the present application provides a chip system, which includes at least one processor and at least one interface circuit.
- the at least one interface circuit is used to perform transceiver functions and send instructions to at least one processor.
- at least one processor executes the method for displaying a user interface as in the first aspect and any possible implementation manner thereof.
- FIG. 1 is a schematic structural diagram of a communication system provided by an embodiment of this application.
- FIG. 2 is a schematic structural diagram of an electronic device provided by an embodiment of the application.
- FIG. 3 is a schematic structural diagram of another electronic device provided by an embodiment of this application.
- FIG. 4 is a schematic flowchart of a method for displaying a user interface provided by an embodiment of the application.
- FIG. 5 is a schematic flowchart of a method for displaying a user interface according to an embodiment of the application.
- FIG. 6 is a schematic diagram of displaying a user interface provided by an embodiment of the application.
- FIG. 7 is another schematic diagram of displaying a user interface provided by an embodiment of the application.
- FIG. 8 is another schematic diagram of displaying a user interface provided by an embodiment of the application.
- FIG. 9 is another schematic diagram of displaying a user interface provided by an embodiment of the application.
- FIG. 10 is another schematic diagram of displaying a user interface provided by an embodiment of the application.
- FIG. 11 is another schematic diagram of displaying a user interface provided by an embodiment of the application.
- FIG. 12 is yet another schematic diagram of displaying a user interface provided by an embodiment of the application.
- FIG. 13 is another schematic diagram of displaying a user interface provided by an embodiment of the application.
- FIG. 14 is yet another schematic diagram of displaying a user interface provided by an embodiment of the application.
- FIG. 15 is yet another schematic diagram of displaying a user interface provided by an embodiment of the application.
- FIG. 16 is a schematic structural diagram of a chip system provided by an embodiment of the application.
- Vehicle-mounted terminals (which may also be described as car machines) are mostly installed in the car's center console and can realize communication between people and cars and between the car and the outside world (for example, vehicle to vehicle).
- the vehicle-mounted terminal 200 may establish a communication connection with the electronic device 100 in a wired or wireless manner.
- The electronic device 100 in this application may be, for example, a mobile phone, a tablet computer, a personal computer (PC), a personal digital assistant (PDA), a smart watch, a netbook, a wearable electronic device, an augmented reality (AR) device, a virtual reality (VR) device, an in-vehicle device, a smart car, a smart speaker, a robot, etc.
- This application does not impose special restrictions on the specific form of the electronic device.
- The car machine 200 and the electronic device 100 may be interconnected using the MirrorLink standard, so as to realize two-way control of specific application software between the electronic device 100 and the car machine 200.
- While the car is driving, the user does not need to look at the screen of the electronic device 100, touch its screen, or operate its physical buttons; using only the physical buttons on the car machine 200 and the touch screen of the car machine 200, the user can control the electronic device 100, including answering/making calls, listening to music on the mobile phone, navigating with the mobile phone, etc. Of course, the mobile phone itself remains operable at this time.
- The car machine 200 can receive data sent by the electronic device 100, where the data includes, but is not limited to: image information inside or outside the vehicle collected by a camera; the user's voice and other sounds in the surrounding environment detected by a built-in sound pickup device; data detected by sensors; etc.
- The car machine 200 can make a judgment based on its own data and the data obtained from the electronic device 100 to determine whether it is in a preset scene, such as bad weather or a congested road section, so that the car machine 200 can execute corresponding operations, such as displaying the corresponding interface or giving a voice broadcast prompt.
- Alternatively, the car machine 200 may not establish a communication connection with the electronic device 100.
- The car machine 200 can detect the user's touch operations through the touch screen and the user's operation of the physical buttons on the steering wheel; it can also obtain data from the vehicle's own camera devices and sensing devices, and then determine whether the vehicle is in a preset scene, so that the car machine 200 can perform corresponding operations, such as displaying a corresponding interface or a voice broadcast prompt.
- FIG. 2 shows a schematic diagram of the structure of the electronic device 100.
- The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone jack 170D, a sensor module 180, buttons 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, etc.
- The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, etc.
- the structure illustrated in the embodiment of the present invention does not constitute a specific limitation on the electronic device 100.
- the electronic device 100 may include more or fewer components than shown, or combine certain components, or split certain components, or arrange different components.
- the illustrated components can be implemented in hardware, software, or a combination of software and hardware.
- the processor 110 may include one or more processing units.
- The processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc.
- the different processing units may be independent devices or integrated in one or more processors.
- The application processor may determine, based on data obtained from the sensors (for example, the acceleration obtained by the acceleration sensor 180E), whether the user is in a driving state and whether the user is in a driving scene that requires a high degree of concentration, and thereby determine whether it is necessary to disable some functions of the electronic device 200 or to control some functions of the electronic device 200 by voice. It can be understood that some or all of this data processing may also be the responsibility of the GPU or NPU, which is not limited in the embodiments of the present application.
- When the electronic device 100 captures images of the vehicle's surroundings through the camera 193, or receives images of the vehicle's surroundings sent by the electronic device 200, it can call the GPU or NPU of the electronic device 100 for image analysis to determine whether the vehicle is on a high-risk road section, the current weather conditions, etc.
- the controller can generate operation control signals according to the instruction operation code and timing signals to complete the control of fetching and executing instructions.
- a memory may also be provided in the processor 110 to store instructions and data.
- the memory in the processor 110 is a cache memory.
- The memory can store instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs the instructions or data again, it can call them directly from this memory, which avoids repeated accesses, reduces the waiting time of the processor 110, and improves system efficiency.
- the processor 110 may include one or more interfaces.
- The interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
- the charging management module 140 is used to receive charging input from the charger.
- the charger can be a wireless charger or a wired charger.
- the charging management module 140 may receive the charging input of the wired charger through the USB interface 130.
- the charging management module 140 may receive the wireless charging input through the wireless charging coil of the electronic device 100. While the charging management module 140 charges the battery 142, it can also supply power to the electronic device through the power management module 141.
- the power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110.
- the power management module 141 receives input from the battery 142 and/or the charging management module 140, and supplies power to the processor 110, the internal memory 121, the display screen 194, the camera 193, and the wireless communication module 160.
- the power management module 141 can also be used to monitor parameters such as battery capacity, battery cycle times, and battery health status (leakage, impedance).
- the power management module 141 may also be provided in the processor 110.
- the power management module 141 and the charging management module 140 may also be provided in the same device.
- the wireless communication function of the electronic device 100 can be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, and the baseband processor.
- the antenna 1 and the antenna 2 are used to transmit and receive electromagnetic wave signals.
- Each antenna in the electronic device 100 can be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization.
- Antenna 1 can be multiplexed as a diversity antenna of a wireless local area network.
- the antenna can be used in combination with a tuning switch.
- the mobile communication module 150 can provide a wireless communication solution including 2G/3G/4G/5G and the like applied to the electronic device 100.
- the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (LNA), etc.
- the mobile communication module 150 can receive electromagnetic waves by the antenna 1, and perform processing such as filtering, amplifying and transmitting the received electromagnetic waves to the modem processor for demodulation.
- the mobile communication module 150 can also amplify the signal modulated by the modem processor, and convert it into electromagnetic wave radiation via the antenna 1.
- at least part of the functional modules of the mobile communication module 150 may be provided in the processor 110.
- at least part of the functional modules of the mobile communication module 150 and at least part of the modules of the processor 110 may be provided in the same device.
- the modem processor may include a modulator and a demodulator.
- the modulator is used to modulate the low frequency baseband signal to be sent into a medium and high frequency signal.
- the demodulator is used to demodulate the received electromagnetic wave signal into a low-frequency baseband signal. Then the demodulator transmits the demodulated low-frequency baseband signal to the baseband processor for processing. After the low-frequency baseband signal is processed by the baseband processor, it is passed to the application processor.
- the application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.), or displays an image or video through the display screen 194.
- the modem processor may be an independent device. In other embodiments, the modem processor may be independent of the processor 110 and be provided in the same device as the mobile communication module 150 or other functional modules.
- The wireless communication module 160 can provide wireless communication solutions applied to the electronic device 100, including wireless local area network (WLAN) (such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR) technology, etc.
- the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
- the wireless communication module 160 receives electromagnetic waves via the antenna 2, frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110.
- the wireless communication module 160 may also receive the signal to be sent from the processor 110, perform frequency modulation, amplify it, and convert it into electromagnetic waves to radiate through the antenna 2.
- the antenna 1 of the electronic device 100 is coupled with the mobile communication module 150, and the antenna 2 is coupled with the wireless communication module 160, so that the electronic device 100 can communicate with the network and other devices through wireless communication technology.
- The wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division synchronous code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc.
- The GNSS may include a global positioning system (GPS), a global navigation satellite system (GLONASS), a BeiDou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or a satellite-based augmentation system (SBAS).
- the electronic device 100 implements a display function through a GPU, a display screen 194, an application processor, and the like.
- the GPU is an image processing microprocessor, which is connected to the display screen 194 and the application processor.
- the GPU is used to perform mathematical and geometric calculations for graphics rendering.
- the processor 110 may include one or more GPUs, which execute program instructions to generate or change display information.
- the display screen 194 is used to display images, videos, and the like.
- the display screen 194 includes a display panel.
- The display panel may adopt a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, a quantum dot light-emitting diode (QLED), etc.
- the electronic device 100 may include one or N display screens 194, and N is a positive integer greater than one.
- the electronic device 100 can realize a shooting function through an ISP, a camera 193, a video codec, a GPU, a display screen 194, and an application processor.
- The ISP is used to process the data fed back by the camera 193. For example, when taking a photo, the shutter opens and light is transmitted through the lens to the camera's photosensitive element, which converts the optical signal into an electrical signal and transmits it to the ISP, where it is processed and converted into an image visible to the naked eye.
- The ISP can also optimize the noise, brightness, and skin color of the image, as well as parameters such as the exposure and color temperature of the shooting scene.
- the ISP may be provided in the camera 193.
- the camera 193 is used to capture still images or videos.
- the object generates an optical image through the lens and is projected to the photosensitive element.
- The photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
- the photosensitive element converts the optical signal into an electrical signal, and then transfers the electrical signal to the ISP to convert it into a digital image signal.
- ISP outputs digital image signals to DSP for processing.
- DSP converts digital image signals into standard RGB, YUV and other formats of image signals.
- the electronic device 100 may include one or N cameras 193, and N is a positive integer greater than one.
- Digital signal processors are used to process digital signals. In addition to digital image signals, they can also process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform Fourier transform on the energy of the frequency point.
- Video codecs are used to compress or decompress digital video.
- the electronic device 100 may support one or more video codecs. In this way, the electronic device 100 can play or record videos in multiple encoding formats, such as: moving picture experts group (MPEG) 1, MPEG2, MPEG3, MPEG4, and so on.
- NPU is a neural-network (NN) computing processor.
- applications such as intelligent cognition of the electronic device 100 can be realized, such as image recognition, face recognition, voice recognition, text understanding, and so on.
- the external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 100.
- the external memory card communicates with the processor 110 through the external memory interface 120 to realize the data storage function. For example, save music, video and other files in an external memory card.
- the internal memory 121 may be used to store computer-executable program code, and the executable program code includes instructions.
- the internal memory 121 may include a storage program area and a storage data area.
- the storage program area can store an operating system, at least one application program (such as a sound playback function, an image playback function, etc.) required by at least one function.
- the data storage area can store data (such as audio data, phone book, etc.) created during the use of the electronic device 100.
- the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash storage (UFS), and the like.
- the processor 110 executes various functional applications and data processing of the electronic device 100 by running instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor.
- the electronic device 100 can implement audio functions through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor. For example, music playback, recording, etc.
- the audio module 170 is used to convert digital audio information into an analog audio signal for output, and is also used to convert an analog audio input into a digital audio signal.
- the audio module 170 can also be used to encode and decode audio signals.
- the audio module 170 may be provided in the processor 110, or part of the functional modules of the audio module 170 may be provided in the processor 110.
- the speaker 170A, also called a "loudspeaker", is used to convert audio electrical signals into sound signals.
- the electronic device 100 can listen to music through the speaker 170A, or listen to a hands-free call.
- the receiver 170B also called “earpiece” is used to convert audio electrical signals into sound signals.
- when the electronic device 100 answers a call or a voice message, the voice can be received by bringing the receiver 170B close to the human ear.
- the microphone 170C, also called a "mic" or "mike", is used to convert sound signals into electrical signals.
- the user can speak with the mouth close to the microphone 170C to input a sound signal into the microphone 170C.
- the electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C, which can implement noise reduction functions in addition to collecting sound signals. In other embodiments, the electronic device 100 may also be provided with three, four or more microphones 170C to collect sound signals, reduce noise, identify sound sources, and realize directional recording functions.
- the earphone interface 170D is used to connect wired earphones.
- the earphone interface 170D may be a USB interface 130, a 3.5mm open mobile terminal platform (OMTP) standard interface, or a cellular telecommunications industry association of the USA (CTIA) standard interface.
- the button 190 includes a power-on button, a volume button, and so on.
- the button 190 may be a mechanical button. It can also be a touch button.
- the electronic device 100 may receive key input, and generate key signal input related to user settings and function control of the electronic device 100.
- the motor 191 can generate vibration prompts.
- the motor 191 can be used for incoming call vibration notification, and can also be used for touch vibration feedback.
- touch operations that act on different applications can correspond to different vibration feedback effects.
- for touch operations acting on different areas of the display screen 194, the motor 191 can also produce different vibration feedback effects.
- different application scenarios (for example: time reminders, receiving messages, alarm clocks, games, etc.) can also correspond to different vibration feedback effects.
- the touch vibration feedback effect can also support customization.
- the indicator 192 may be an indicator light, which may be used to indicate the charging status, power change, or to indicate messages, missed calls, notifications, and so on.
- the SIM card interface 195 is used to connect to the SIM card.
- FIG. 3 shows a schematic diagram of the structure of the electronic device 200.
- the electronic device 200 may include a processor 210, a memory 220, a wireless communication module 230, a speaker 240, a microphone 250, a display screen 260, a camera 270, a USB interface 280, a sensor module 290, and so on.
- the sensor module 290 may include a pressure sensor 290A, a magnetic sensor 290B, an acceleration sensor 290C, a temperature sensor 290D, a touch sensor 290E, an ambient light sensor 290F, a positioning system 290G, a rainfall sensor 290H, and so on.
- the electronic device 200 can also determine whether the vehicle is in a motion state, that is, whether the user is in a driving state, according to the data of the acceleration sensor 290C.
- the electronic device 200 can determine whether the vehicle is in a thunderstorm based on the data of the rain sensor 290H; or the electronic device 200 can send the data of the rain sensor 290H to the electronic device 100, so that the electronic device 100 can determine whether the vehicle is in a thunderstorm, etc.
- the electronic device 200 can determine whether the vehicle is in dense fog or darkness according to the ambient light sensor 290F.
- the electronic device 200 may send the data of the ambient light sensor 290F to the electronic device 100 so that the electronic device 100 can determine whether the vehicle is in an environment such as dense fog or darkness.
- the processor 210 may include one or more processing units, and different processing units may be independent devices or integrated in one or more processors.
- the electronic device 200 may obtain, for example, sensor data on the electronic device 200 or the electronic device 100 to determine the state of the user and the scene in which it is located, so as to determine to display the corresponding user interface.
- multiple applications may run on the electronic device 200, and these multiple applications may directly communicate with the application server. For example: receiving new messages or incoming call reminders sent by the application server, and sending the corresponding operations performed on the user interface to the application server.
- the electronic device 200 may obtain, for example, data from one or more sensors or receive data sent by the electronic device 100, and determine the state of the user and the scene in which it is located, to determine whether to display a user interface with some functions and/or operations disabled.
- the memory 220 may be used to store computer-executable program instructions.
- the memory 220 may include a storage program area and a storage data area.
- the storage program area can store an operating system, at least one application program (such as a sound playback function, an image playback function, etc.) required by at least one function.
- the data storage area can store data (such as audio data, phone book, etc.) created during the use of the electronic device 200.
- the memory 220 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash storage (UFS), and the like.
- the processor 210 executes various functional applications and data processing of the electronic device 200 by running instructions stored in the memory 220 and/or instructions stored in a memory provided in the processor.
- the wireless communication module 230 may provide a wireless communication solution including WLAN, such as Wi-Fi network, Bluetooth, NFC, IR, etc., applied on the electronic device 200.
- the wireless communication module 230 may be one or more devices integrating at least one communication processing module.
- the electronic device 200 may establish a wireless communication connection with the electronic device 100 through the wireless communication module 230.
- the electronic device 200 may also establish a wired communication connection with the electronic device 100 through the USB interface 280, which is not limited in the embodiment of the present application.
- the electronic device 200 can capture images of the surrounding environment of the vehicle through the camera 270, so that the electronic device 200 or the electronic device 100 can perform image analysis to determine the user's current scene, for example, whether the vehicle is driving through high-risk sections, etc.
- the functions of the components such as the speaker 240, the microphone 250, and the display screen 260 can be referred to the related description in the electronic device 100, which will not be repeated here.
- the structure illustrated in the embodiment of the present invention does not constitute a specific limitation on the electronic device 200.
- the electronic device 200 may include more or fewer components than shown, or combine certain components, or split certain components, or arrange different components.
- the illustrated components can be implemented in hardware, software, or a combination of software and hardware.
- touch-sensitive vehicle-mounted systems have gradually become the standard configuration of vehicles of various manufacturers.
- because the touch screen of the vehicle-mounted system cannot provide tactile feedback, it occupies more of the user's visual resources during driving, which makes it difficult for users to concentrate on driving and affects driving safety.
- users can also choose to connect their mobile phone to the car, and the interaction of the functions of the car and the mobile phone will further enrich the use of the car system.
- the user may pay attention to the display screen of the in-vehicle system to process the notification messages received by the mobile phone or use the richer entertainment functions of the mobile phone.
- when the user drives the vehicle at high speed or in some specific scenarios (for example, the user drives the vehicle in a densely crowded section, drives through a high-risk section, drives at night, drives in bad weather, or drives on a speed-limited section),
- users should devote more energy to focused driving instead of being distracted. That is, in some cases, a user interface that can reduce the visual resources and cognitive resources occupied by the vehicle display screen should be provided to ensure driving safety.
- the embodiment of the present application provides a method for displaying a user interface.
- after the car machine starts to work, it can automatically recognize the various scenes in which the vehicle is located and display different user interfaces based on different scenes.
- the corresponding user interface can be displayed according to the changed scene.
- the displayed user interface may include a first user interface and a second user interface.
- the first user interface is a user interface that does not display some functions, or that displays them but does not allow them to be operated;
- the second user interface is a user interface that displays all functions.
- the above-mentioned functions include some functions and some operations of some functions, among which some functions may be presented in the form of mobile phone software (application, APP).
- the touch screen of the car machine may display the second user interface.
- the vehicle touch screen displays the first user interface.
- the method of displaying the user interface in the embodiments of the present application can be pre-configured in the vehicle-mounted terminal, so that all vehicle-mounted terminals use this method to display the user interface; or, according to the user's selection, the vehicle-mounted terminal can stop using this method of displaying the user interface after the user so sets it.
- the above-mentioned specific scenes that require a high degree of concentration are, for example: the user drives the vehicle at high speed (a first preset speed can be determined according to the performance of the vehicle, and a real-time vehicle speed higher than the first preset speed is determined to be high-speed driving);
- the user is in a densely populated road section (such as commercial centers, tourist attractions, school sections, etc.)
- high-risk sections such as continuous turns, highways, tunnels, etc.
- the user drives the vehicle at night (for example, determined according to the time of sunset, or according to the light intensity);
- the user drives the vehicle in severe weather (such as dense fog, ice and snow, a thunderstorm, etc.); and the user is on a speed-limited section (such as a downhill section, a village section, a road intersection, etc.).
- in some cases, the car machine alone cannot accurately identify the scene the user is in at this time.
- the recognition results of the mobile phone and the car machine can also be combined to more accurately identify the user's scene.
- the touch screen of the car machine can display a user interface containing all functions.
- the vehicle touch screen displays a user interface in which some functions and/or operations are hidden or not allowed to be operated.
- the user may start to be in a scene that does not require a high degree of concentration, and operates certain functions on the second user interface of the touch screen of the car, and during the operation, the user enters a scene that requires a high degree of concentration.
- the car will immediately interrupt the current operation and switch to display the first user interface to ensure driving safety.
- the user interface can be switched after the user completes the current operation to ensure the user's sense of experience.
- the embodiments of the present application do not make specific limitations.
- the car machine starts to display the user interface on the touch screen, and at this time it is determined that the car machine starts to work.
- the user interface at this time is the default main page, that is, the second user interface including all functions is displayed (see FIG. 6).
- the embodiment of the application does not limit whether the mobile phone is connected to the car machine; that is, the mobile phone may or may not be connected to the car machine. This step only considers whether the car machine is working, and does not need to consider the connection status between the mobile phone and the car machine.
- when the mobile phone is not connected to the car machine, the car machine alone will recognize the scene.
- the data of the mobile phone can be synchronized with the car machine.
- the car machine combines the data collected by the mobile phone to identify the scene the user is in, and then more accurately displays the user interface corresponding to the scene.
- the car machine can undertake all or part of the data processing work of the mobile phone, and the embodiment of the present application does not limit the division of labor between the car machine and the mobile phone.
- step S402: the car machine determines whether the vehicle is in a driving state. If it is not in a driving state (that is, it is in a non-driving state), step S403 is executed; if it is in a driving state, step S404 is executed.
- information such as the vehicle speed measured by the vehicle can be obtained to determine whether the vehicle is moving, and thereby whether the vehicle is in a driving state. It is also possible to determine the real-time position of the vehicle through the global positioning system (GPS) of the car machine and to calculate the moving speed of the vehicle based on the real-time position. It is then judged whether the moving speed of the vehicle is greater than a threshold value: if so, the vehicle can be considered to be in a driving state; if not, the vehicle can be considered not to be in a driving state.
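The position-based judgment above can be sketched as follows. This is a minimal illustration, not the embodiment's implementation: the haversine distance formula and the 5 km/h threshold are assumptions chosen for the example.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS fixes, in meters."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def is_driving(fix_a, fix_b, threshold_kmh=5.0):
    """fix_a / fix_b: (lat, lon, timestamp_s) real-time position fixes.
    Returns True if the speed derived from the two positions exceeds
    the (assumed) threshold, i.e. the vehicle is in a driving state."""
    dist_m = haversine_m(fix_a[0], fix_a[1], fix_b[0], fix_b[1])
    dt_s = fix_b[2] - fix_a[2]
    if dt_s <= 0:
        return False
    speed_kmh = dist_m / dt_s * 3.6
    return speed_kmh > threshold_kmh
```

The same comparison applies when the speed comes from the vehicle's own speed sensor or the mobile phone's acceleration sensor; only the source of the speed value changes.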
- the car can also obtain the data of the mobile phone, and obtain the moving speed of the mobile phone through a sensor (such as an acceleration sensor) provided in the mobile phone, so as to determine whether the vehicle is in a driving state.
- step S403: the touch screen of the vehicle machine displays the second user interface.
- the second user interface displays the first function and the second function in a second preset mode, and the second preset mode is used to indicate that the user is allowed to operate the first function and the second function.
- the first function and the second function constitute all the functions displayed on the touch screen of the vehicle, that is, the second user interface at this time displays all the functions for the user to choose and use.
- the user interface includes a main menu 601 displayed on the left side and a display interface 602 displayed on the right side.
- the main menu 601 displays the main functions included in the vehicle, and the display content in the display interface 602 corresponds to each function in the main menu 601 on the left, respectively displaying a different user interface.
- the main menu 601 of the user interface shown in FIG. 6 exemplarily gives five functions of homepage, navigation, call, music, and car control.
- the touch screen displays all commonly used functions on the homepage interface (that is, the display interface 602 shown after the "Home" function is selected on the main menu 601). Figure 6 exemplarily shows the user interface displayed when "Home" is selected on the main menu 601: the display interface 602 shows the six commonly used functions of map, WeChat, contacts, telephone, entertainment, and weather included in the homepage interface.
- the car machine may also contain other main or commonly used application functions that can be displayed on the touch screen, and based on touch operations the current homepage interface can be switched to display more common functions.
- the embodiment of this application does not make a specific description.
- the user can click the corresponding function step by step to enter a sub-menu as needed, and realize the final function. For example, if the user wants to use the car machine to make a call, he can touch the phone icon to enter the call interface, and then perform a dialing or contact-lookup operation to place the call. Or, as shown in Figure 7, the user clicks the call function on the main menu 601 on the left side of the screen, and the display interface 602 correspondingly displays the call interface. The call interface can provide the user with operations such as dialing, contact search, and browsing the recent-contacts list (this interface is not shown in FIG. 7, but it is understandable that the functional interface can be entered through related operations).
- step S404: the car machine determines whether the vehicle is in a preset scene or the current vehicle speed is greater than a first preset speed. If the car machine determines that the vehicle is in a preset scene or the current vehicle speed is greater than the first preset speed, step S405 is executed; if it determines that the vehicle is not in a preset scene and the current vehicle speed is less than or equal to the first preset speed, step S403 is executed.
- the vehicle touch screen displays the first user interface.
- the preset scene may be, for example: the user drives the vehicle on a road section with a dense flow of people (such as a commercial center, a tourist attraction, a school section, etc.); the user drives the vehicle through a high-risk section (such as a continuous turn, a highway, a tunnel, etc.);
- the user drives the vehicle at night (for example, this can be determined according to the time of sunset, according to the light intensity, etc.);
- the user drives the vehicle in severe weather (for example: dense fog, ice and snow, a thunderstorm, etc.); or the user drives the vehicle on a speed-limited section (for example, a downhill section, a village section, a road intersection, etc.).
- whether the current scene is one of the above preset scenes is a conclusion drawn by the vehicle machine through analysis and judgment based on sensor data or other methods.
- in the following, the judgment methods for the five preset scenes are given; please refer to the detailed description below.
- Preset scene 1: The vehicle is on a road section with a dense flow of people.
- the vehicle machine judges whether the vehicle is in a densely crowded road section according to at least one of the following data: the driving state of the vehicle, the image of the surrounding environment of the vehicle, or the driving route. For example, it can be determined whether the vehicle is located in a densely crowded road section according to the driving speed of the vehicle. It can also be determined whether the vehicle is in a crowded area such as a commercial center based on the location of the vehicle.
- the vehicle machine can obtain the change of the vehicle speed over a period of time. If the vehicle speed changes repeatedly within the preset time period and the vehicle speed is low, that is, the vehicle brakes repeatedly in the preset time period, it can be determined that the vehicle is in the above-mentioned densely crowded road section and cannot drive smoothly.
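The repeated-braking heuristic described above might be sketched as follows. The 20 km/h average-speed bound, the 2 km/h per-sample drop, and the three-event count are illustrative assumptions; the embodiment does not fix these values.

```python
def in_dense_crowd_section(speed_samples_kmh, low_speed_kmh=20.0,
                           min_brake_events=3):
    """speed_samples_kmh: vehicle speeds sampled over the preset time window.
    Heuristic: the average speed is low AND the speed repeatedly drops
    (repeated braking), suggesting the vehicle cannot drive smoothly."""
    if not speed_samples_kmh:
        return False
    avg = sum(speed_samples_kmh) / len(speed_samples_kmh)
    if avg >= low_speed_kmh:
        return False
    # Count sample-to-sample drops large enough to look like braking.
    brake_events = 0
    for prev, cur in zip(speed_samples_kmh, speed_samples_kmh[1:]):
        if cur < prev - 2.0:  # drop of more than 2 km/h between samples
            brake_events += 1
    return brake_events >= min_brake_events
```

In the embodiments this speed-based check is only one signal; it can be combined with the navigation-software road information and camera images mentioned below.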
- the navigation software contains road information for various road sections, such as a certain commercial center section, a certain tourist attraction, a school section, etc.
- these road sections are generally crowded with people. Therefore, the current location of the vehicle can be determined through the car machine and/or the mobile phone, and whether the vehicle is in the above-mentioned densely crowded road section can be determined based on the current location and the road information of that location in the navigation software.
- the camera inside the vehicle or a camera connected to the vehicle can also be used: based on the image of the surrounding environment captured by the camera, the environment outside the vehicle is identified to determine whether the vehicle is in a densely crowded road section. A numerical value can be preset; for example, when there are more than 10 people around the vehicle, it is determined that the vehicle is on a road section with a dense flow of people.
- Preset scene 2: The vehicle is passing through a high-risk road section.
- the vehicle machine judges whether the vehicle is in a high-risk road section according to at least one of the following data: the driving route of the vehicle, the driving state, or the image of the surrounding environment of the vehicle. For example, it can be determined whether the vehicle is turning continuously according to the driving route of the vehicle. It is also possible to determine whether the vehicle is on a highway based on the speed of the vehicle, or whether the vehicle is driving on a slippery road according to the driving route and speed of the vehicle.
- the light sensor can also be used to determine the light around the vehicle to determine whether it is located in a tunnel or the like.
- the camera inside the vehicle or a camera connected to the vehicle can also be used: based on the image of the surrounding environment captured by the camera, the environment outside the vehicle is identified to determine whether the vehicle is in the above-mentioned high-risk section.
- the navigation software contains the road condition data of each road section (for example: sharp turns, tunnels, etc.). The current position of the vehicle can also be determined through the mobile phone or the car machine, and whether the vehicle is in the above-mentioned high-risk road section can be determined based on the current position and the road condition data of that position in the navigation software.
- Preset scene 3: The vehicle is driving at night.
- the vehicle machine judges whether the vehicle is in a night driving state according to at least one of the following data: the ambient light condition of the vehicle, the image of the surrounding environment of the vehicle, or the real-time time. For example: when the light intensity is lower than a threshold and the duration exceeds a threshold, it is determined that the user is driving at night. Or, if the user drives the vehicle during the period from one hour after sunset to one hour before sunrise, it is determined that the user is driving at night.
- image analysis technology can be used to identify the light intensity of the environment in which the vehicle is located based on the image of the surrounding environment captured by the camera of the vehicle and/or mobile phone, and determine whether the user is driving at night.
- the light around the vehicle can be judged according to the light sensor of the vehicle, and the light duration can be further combined to more accurately determine whether the user is driving at night.
- it is also possible to determine the start time of night driving according to the sunset time for the current time period and location coordinates, and to determine the end time of night driving according to the sunrise time. For example, if the sun rises at 7:00 and sets at 18:00 in the current season, the night driving period can be defined as 19:00 to 6:00 the next day, and a user driving during this period is driving at night.
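The night-driving window in the example (19:00 to 6:00 the next day for an 18:00 sunset and 7:00 sunrise) can be computed as follows. This is a sketch; the one-hour margin follows the example above, and handling of a window that wraps past midnight is the only subtlety.

```python
from datetime import time

def night_window(sunset, sunrise, margin_hours=1):
    """Night driving starts one hour after sunset and ends one hour
    before sunrise (per the example in the text)."""
    start = time((sunset.hour + margin_hours) % 24, sunset.minute)
    end = time((sunrise.hour - margin_hours) % 24, sunrise.minute)
    return start, end

def is_night_driving(now, sunset, sunrise, margin_hours=1):
    """True if the current time falls inside the night-driving window."""
    start, end = night_window(sunset, sunrise, margin_hours)
    if start <= end:
        return start <= now <= end
    return now >= start or now <= end  # window wraps past midnight
```

For an 18:00 sunset and a 7:00 sunrise, `night_window` yields 19:00 to 6:00, matching the example.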
- Preset scene 4: The current weather conditions are bad.
- the vehicle machine judges whether the vehicle is driving in severe weather conditions according to at least one of the following data: the ambient light condition of the vehicle, the image of the surrounding environment, or the data of the weather software.
- the surrounding environment of the vehicle can be determined according to the light sensor in the vehicle, for example, the vehicle is in dense fog or darkness. It is also possible to determine whether the vehicle is in a thunderstorm or the like according to the rainfall sensor in the vehicle.
- a camera inside or connected to the vehicle can also be used: based on the image of the surrounding environment captured by the camera, image analysis technology identifies the environment outside the vehicle to determine whether the vehicle is in the above-mentioned severe weather.
- the current weather conditions can also be obtained through weather application software to determine whether the vehicle is in the above-mentioned severe weather.
- Preset scene 5: The vehicle is on a speed-limited road section.
- the vehicle machine judges whether the vehicle is on a speed-limited road section according to at least one of the following data: the image of the vehicle's surrounding environment, or the current road information.
- the current road information can be determined through the navigation software in the car machine and/or the mobile phone, so as to determine whether the user is on the speed limit road section.
- special road sections (for example: downhill sections, village sections, road intersections, etc.) are generally subject to vehicle speed limits to ensure driving safety, and the navigation software contains such special-section information; the car machine can therefore confirm whether the user is on a speed-limited section based on the driving position of the user's vehicle.
- a camera connected in the vehicle can also be used: based on the speed-limit signs on the road section around the vehicle captured by the camera, image analysis technology determines whether the vehicle is on a speed-limited road section.
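Combining the two signals described for preset scene 5 (navigation-data lookup and sign recognition) might look like the following sketch. The `SPECIAL_SECTIONS` table, its section identifiers, and the flat-dictionary data model are hypothetical; the text does not specify how navigation software stores special-section information.

```python
# Hypothetical road database keyed by road-section id.
SPECIAL_SECTIONS = {
    "G204-km12": {"type": "downhill", "speed_limit_kmh": 40},
    "S318-km03": {"type": "village", "speed_limit_kmh": 30},
}

def on_speed_limit_section(section_id, detected_sign_limit_kmh=None):
    """True if the section the vehicle is on is a known special section,
    or a speed-limit sign was recognized from the camera image
    (detected_sign_limit_kmh is the recognized limit, if any)."""
    if detected_sign_limit_kmh is not None:
        return True
    return section_id in SPECIAL_SECTIONS
```

Either signal alone is sufficient here, mirroring the "at least one of the following data" phrasing used for each preset scene.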
- the vehicle machine can obtain the corresponding data according to the above method to determine whether the vehicle is in a preset scene or in which one or several preset scenes, and then determine whether the user needs to be highly concentrated at this time.
- the vehicle speed is a major factor that affects the user's attention.
- the first preset speed is set to determine whether the current vehicle is in a high-speed driving state.
- the first preset speed is a threshold determined according to vehicle performance or directly preset.
- the vehicle can obtain information such as the measured vehicle speed in real time, and compare the real-time vehicle speed with the first preset speed, so as to determine whether the user is driving the vehicle at high speed.
- the first preset speed can be set to 60km/h, and when the user drives the vehicle at a speed greater than 60km/h, it is considered that the user is driving the vehicle at a high speed.
- if the current user’s vehicle has better performance, the first preset speed can be set to 65 km/h; or, if the user currently driving the vehicle is a novice driver or is older, a lower first preset speed, such as 50 km/h, can be set.
- the first preset speed can also be set according to other conditions, which is not specifically limited in the embodiment of the present application.
- the real-time position of the vehicle can also be determined through the GPS of the car machine, and the moving speed of the vehicle can be calculated based on the real-time position. It is then determined whether the moving speed of the vehicle is greater than the first preset speed. If so, it can be considered that the user is driving the vehicle at high speed; if not, it can be considered that the user is not driving the vehicle at high speed.
- the vehicle machine can also obtain data from the user's mobile phone and derive the moving speed of the phone through a sensor set in the phone (such as an acceleration sensor), so as to determine whether the user is driving the vehicle at high speed.
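The speed determination above can draw on several sources (the vehicle's own speed measurement, GPS-derived speed, or a connected phone's sensors). A minimal sketch, with the function names and the 60km/h default assumed purely for illustration:

```python
FIRST_PRESET_SPEED_KMH = 60.0  # assumed threshold; may be tuned per vehicle or driver


def current_speed_kmh(wheel_speed=None, gps_speed=None, phone_speed=None):
    """Pick the best available speed source: wheel sensor, then GPS, then phone."""
    for source in (wheel_speed, gps_speed, phone_speed):
        if source is not None:
            return source
    return 0.0


def is_high_speed(wheel_speed=None, gps_speed=None, phone_speed=None,
                  threshold=FIRST_PRESET_SPEED_KMH):
    """True when the vehicle is considered to be in a high-speed driving state."""
    return current_speed_kmh(wheel_speed, gps_speed, phone_speed) > threshold
```

The fallback ordering is an assumption; the patent only states that any of the three sources may be used to judge high-speed driving.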
- in step S404, the vehicle machine first judges whether the user is driving the vehicle in a preset scene. If the vehicle is in a preset scene, step S405 can be executed directly; that is, when the vehicle is in a preset scene, the user should concentrate on driving and should not be unduly distracted by operating the vehicle machine. If the vehicle is not in a preset scene, it is further judged whether the current vehicle speed is greater than the first preset speed, that is, whether the vehicle is in a high-speed driving state. If the current vehicle speed is greater than the first preset speed, step S405 is executed: even when the vehicle is not in a preset scene, the user must concentrate on driving while the vehicle is in a high-speed driving state.
- otherwise, step S403 is executed; that is, only when the vehicle is neither in a preset scene nor in a high-speed driving state does the vehicle machine display the user interface containing all functions for the user to use.
- alternatively, in step S404 the vehicle machine first judges whether the current vehicle speed is greater than the first preset speed. If so, that is, if the vehicle is in a high-speed driving state, step S405 is executed directly: when the vehicle is in a high-speed driving state, the user must concentrate highly on driving. If the current vehicle speed is less than or equal to the first preset speed, it is further judged whether the vehicle is in a preset scene. As described above, if the vehicle is in a preset scene, step S405 is executed; if it is not, that is, the vehicle is neither in a preset scene nor in a high-speed driving state, step S403 is executed.
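Both orderings of the two checks in step S404 lead to the same outcome, which can be sketched as follows. This is an illustrative sketch only; the function name and the string step labels simply mirror the description above:

```python
def choose_interface(in_preset_scene, speed_kmh, first_preset_speed_kmh=60.0):
    """Return which step to execute: "S405" (restricted first user interface)
    or "S403" (user interface with all functions).

    The restricted interface is shown whenever the vehicle is in a preset
    scene OR is driving faster than the first preset speed, so the order in
    which the two conditions are tested does not change the result.
    """
    if in_preset_scene or speed_kmh > first_preset_speed_kmh:
        return "S405"  # display first user interface (first function hidden/disabled)
    return "S403"      # display user interface containing all functions
```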
- the touch screen of the vehicle machine displays the first user interface.
- the first user interface does not display the first function; or the first user interface displays the first function in a first preset manner, and the first preset manner is used to indicate that the first function cannot be operated by the user.
- the second function is displayed in the second preset mode in the first user interface, and the second preset mode is used to indicate that the user is allowed to operate the second function.
- the first function is a function that occupies more visual resources and cognitive resources of the user during operation
- the second function is a function other than the first function
- the second function occupies less visual resources or cognitive resources of the user.
- the functions in the embodiments of the present application refer to certain functions or certain operations of certain functions. This function may be presented in the form of an APP installed on the electronic device, and the operation may be an operation option provided by the APP installed on the electronic device.
- a function that occupies more of the user's visual and cognitive resources is an operation that requires the user to think further and requires visual participation. For example, when dialing, the user must first keep the number to be dialed in mind, and must use visual resources to tap the corresponding digits on the touch screen in order while ensuring that each tap is correct; only then is the dialing operation performed correctly, which occupies many of the user's visual and cognitive resources.
- in the navigation function, the user may search for a location. This requires visual resources to input the location on the touch screen, and the accuracy of the input must be ensured. Sometimes the navigation function will recommend several routes.
- the user then needs to think further to confirm the final route, which inevitably occupies more of the user's visual and cognitive resources. Occupying visual and cognitive resources diverts a large amount of the user's attention, making the user unable to concentrate on driving. Therefore, when the vehicle is in a preset scene or driving at high speed, the user must focus on driving the vehicle to ensure driving safety.
- in such situations, the vehicle machine needs to hide these first functions that occupy more of the user's visual and cognitive resources, or render them inoperable, to ensure driving safety.
- the above first function includes any one or more of the following: the entertainment function, the information function, the dial function in the call function, the search-contact function in the call function, the search-address function in the navigation function, the search-song function in the music function, and the weather-query function in the weather function.
- the second function includes any one or more of the following: the recent contact function of the call function, the recommended address function of the navigation function, the recent playlist function in the music function, and the local real-time weather function in the weather function.
- first function and second function are merely exemplary descriptions, and the first function and second function may also include other functions.
- the above-mentioned division of the first function and the second function may be pre-configured by the manufacturer based on experience, or may be determined by the car machine according to the user's use situation, which is not specifically limited in the embodiment of the present application.
- the WeChat function and the entertainment function may occupy more of the user's visual or cognitive resources when used (for example, replying to a WeChat message), so the WeChat function and the entertainment function are determined as the first function.
- the map function, contact function, telephone function, and weather function are determined as the second function. Therefore, as shown in FIG.
- the first user interface does not display the first function, but only displays the second function. That is, the first function is hidden at this time, so that the user cannot operate the first function, but can operate the second function.
- the first function can be rendered inoperable by displaying a cross over its icon on the main screen, while the second function, which the user is allowed to operate, is displayed normally.
- alternatively, the icon can be grayed out (so that the function cannot be used after clicking) to make the first function inoperable by the user; this is not specifically limited in the embodiments of the present application.
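The hiding and graying behavior just described can be sketched as follows. This is an illustrative sketch only; the function list and the classification of WeChat and entertainment as first functions follow the example above, and the dictionary-based icon model is an assumption:

```python
FIRST_FUNCTIONS = {"wechat", "entertainment"}  # example classification from the text
ALL_FUNCTIONS = ["wechat", "entertainment", "map", "contacts", "phone", "weather"]


def render_icons(restricted, mode="hide"):
    """Build the icon list for the home screen.

    mode "hide": first functions are omitted entirely from the interface;
    mode "gray": first functions stay visible but are marked disabled
                 (e.g. grayed out or overlaid with a cross).
    """
    icons = []
    for name in ALL_FUNCTIONS:
        if restricted and name in FIRST_FUNCTIONS:
            if mode == "hide":
                continue  # first function not displayed at all
            icons.append({"name": name, "enabled": False})  # shown but inoperable
        else:
            icons.append({"name": name, "enabled": True})
    return icons
```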
- a first user interface is exemplarily provided, and the first function in the first user interface is part of the operation of the call function.
- some operations of some functions on the vehicle touch screen occupy more of the user's visual or cognitive resources, yet such functions are used frequently or must be available, so they cannot be completely disabled in the above ways (hidden or made inoperable). Instead, only the operations within such functions that occupy more visual or cognitive resources can be disabled, so that the user can concentrate on driving the vehicle.
- the call function is an indispensable function. In order to avoid missing important contact messages, the car machine cannot directly disable this function, but in order to ensure the safety of the user driving, part of the operation of this function can be disabled.
- compared with the call interface shown in Figure 7, the call interface shown in Figure 9 retains only the call log and does not display the dialing and contact-search functions, so that the user can call back the most recent contact with a single click, realizing contact by phone without occupying many visual or cognitive resources.
- the navigation function is a necessary function for the user to drive the vehicle, and the use of this function cannot be completely prohibited. Therefore, the car machine only provides address recommendations on the navigation interface, and the user obtains an optimal driving route by clicking on the corresponding address, and then realizes the navigation function. It is forbidden to search for addresses through the touch screen, so as to ensure the user's driving safety.
- with this method for displaying the user interface, the preset scene and the first preset speed can be set; then, while the vehicle is running, the vehicle machine can determine whether the vehicle is currently in a preset scene or whether the current speed is greater than the first preset speed. If either is the case, the vehicle machine displays the first user interface; that is, when the user is required to concentrate on driving, the vehicle machine can hide, or not allow the user to operate, the functions and/or operation methods that occupy more of the user's visual or cognitive resources, thereby ensuring the user's driving safety.
- in contrast, if the user interface displayed all functions at all times, the user would have to divert attention to operating functions that occupy many visual and cognitive resources even when he needs to concentrate on driving, which makes driving unsafe.
- the first user interface of the car machine may provide an operation portal for the user, so that the user can operate the first function in other ways that occupy less visual resources or cognitive resources of the user.
- the first preset button may be set to provide the user with the above-mentioned operation entry, and the first preset button may be used in combination with the voice function of the car machine or mobile phone to guide the user to perform related operations, so as to improve the user experience.
- a first preset button is added on the basis of the user interface shown in (b) of FIG. 8.
- a first preset button is added on the basis of the user interface shown in FIG. 9.
- the first preset button provides the user with access to such high-resource-consuming functions or operations (occupying more visual resources and cognitive resources of the user), so that the user can use the disabled functions or operations.
- the user can operate the first preset button by clicking, dragging, or other types of actions.
- when the touch screen detects the user's operation of the first preset button, it indicates that the user wants to use this type of high-resource-consuming function or operation; at this time, the vehicle machine executes step S406 to guide the user to operate these functions in the form of a voice broadcast.
- the first preset button may be set by the vehicle machine on the first user interface, or may also be displayed on the second user interface, so that even when the vehicle touch screen displays all functions, the user is still offered another way to operate the touch screen that occupies fewer visual or cognitive resources. To facilitate operation, the first preset button may be configured as a floating button.
- the user interface and display form displayed by the first preset button are not specifically limited in this embodiment of the application.
- alternatively, the first preset button may not be set; instead, the first function displayed in the first preset manner is combined with the voice function of the vehicle machine or mobile phone to guide the user through the related operations. That is, when the first user interface detects the user's operation on the icon of the first function, the voice control mode can be entered. The first user interface may be a user interface that does not display the first function, as shown in (a) of FIG. 8, or a user interface that displays the first function in the first preset manner and the second function in the second preset manner, as shown in (b) of FIG. 8. In the former case the user cannot directly locate the first function, and in the latter the user needs to pick out the required first function from among all the functions, so selecting the first function will occupy a certain amount of the user's attention.
- the vehicle touch screen can detect the user's first operation on the first user interface.
- the vehicle touch screen displays a third user interface.
- the third user interface displays all the first functions in the first preset manner, and only the first functions are displayed, so that the user can quickly locate the required function.
- the first operation is any one of the following operations performed by the user on the touch screen of the vehicle machine: a tap operation, a sliding operation, a preset pressing gesture operation, or a preset gesture operation.
- the click operation may be a single click, double click or multiple click operations by the user in the non-functional icon display area of the first user interface.
- the sliding operation is a vertical or diagonal sliding operation, distinguished from the existing way of switching the display screen of the vehicle touch screen, or a multi-finger sliding operation.
- the preset pressing gesture operation is a pressing gesture set by the user according to usage habits, such as single-finger or multi-finger pressing.
- the preset gesture operation is a gesture set by the user according to usage habits, for example, "draw a circle” and so on.
- the embodiment of the present application does not specifically limit the first operation.
- in this way, the vehicle machine can meet the user's need for certain functions or operations that are not allowed in the current vehicle state, improving the user experience on the premise of ensuring safe driving.
- the car machine starts the voice interaction function.
- the second operation is an operation on the first preset button or on the icon of the first function, and is used to activate the first function.
- the voice control mode is used for voice interaction between the user and the car.
- the car machine prompts the user to operate the first function through the third operation according to the voice interaction function, and informs the user of the operation method of the third operation.
- the third operation is an operation in the form of voice interaction.
- the voice control mode is entered, and the user interface shown in FIG. 13 or 15 is displayed.
- the voice control mode is entered, and the user interface shown in FIG. 14 is displayed.
- when the first function is a certain function, referring to FIG. 13, the first function is operated in the voice control mode as follows.
- when the user selects the "Home" function in the menu bar 601, he operates the first preset button on the user interface corresponding to the display interface 602 to enter the voice control mode.
- first, the vehicle machine announces "the current operation is at risk" to remind the user that this class of functions will increase driving risk.
- the vehicle machine then announces "what help is needed" to ask the user what operation needs to be performed.
- the user can answer the operation he wants to perform. For example, as shown in FIG. 13, the user can answer "Check the latest WeChat message” if he wants to view the WeChat (information) message.
- since the vehicle machine can convert text into voice for broadcast and thus realize this operation without the user's manual participation, it judges through voice control that the WeChat function the user currently wants to use is available, that is, the user may use the WeChat function. The vehicle machine therefore broadcasts "WeChat has been opened", and then plays the latest WeChat message requested by the user: Mum's message, "When will I get home".
- after that, the user can control the vehicle machine directly by voice to perform the desired operation, such as replying to the message, by commanding the vehicle machine to "reply 'half an hour'".
- the vehicle machine receives the command and broadcasts "'half an hour' has been replied", so that the user can confirm the final result.
- if the broadcast result is wrong, the user needs to voice-control the vehicle machine again and re-enter the relevant content.
- for example, the user can command the vehicle machine "the current message is wrong" and "reply 'half an hour' again", so that the vehicle machine performs the correct operation. If the result is correct, the user does not need to repeat the operation.
- the user can also directly enter the voice control mode by operating the icon of the first function through the user interface shown in FIG. 10 or FIG. 12.
- when the first function is a certain operation of a certain function, referring to FIG. 14, the first function is operated in the voice control mode as follows.
- if the user finds in the call interface shown in Figure 11 that the contact he wants to reach, such as "Mom", is not in the recent call record, he cannot perform a dialing operation and thus cannot make the call.
- the user can operate the first preset button in FIG. 11 to display the user interface that can perform voice control as shown in FIG. 14.
- the vehicle machine broadcasts the voice "May I ask what help is needed".
- the car machine judges whether the operation that the current user wants to perform can be completed by voice operation.
- the operation of making a phone call can be completed directly by the vehicle machine without manual control by the user. Therefore, the vehicle machine can perform the dialing operation while broadcasting "Dialing Mom's home number" to inform the user that the required operation is being performed, meeting the user's needs while ensuring driving safety.
- the vehicle when the user performs the second operation to start the voice control mode to operate the first function, the vehicle will evaluate the first function to confirm whether the first function can be operated in the voice control mode.
- the WeChat function can be operated by the conversion of text and voice.
- certain entertainment functions demand many of the user's visual and cognitive resources and cannot be operated by voice; the vehicle machine will prompt the user that such functions increase driving risk and cannot be used.
- the vehicle will announce "the current operation is at risk” to remind the user that operating the first function at this time will increase driving risk.
- the car machine continued to announce "May I ask what help is needed”.
- the user can answer the desired operation, such as the entertainment function shown in FIG. 10, and the user can voice command the car to "turn on the entertainment function".
- the car machine needs to determine whether the function activated by the current user command can be operated by voice control. Since the entertainment function requires manual participation by the user and cannot rely solely on voice operation, the car machine determines that the function cannot be used in the current state, and then voice broadcasts "The current scene entertainment function is risky and cannot be run" to prevent users from using this function and ensure users' driving safety.
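The voice-operability judgment described above can be sketched as follows. The capability table is an assumption made for illustration; the text only gives WeChat and dialing as voice-operable examples and the entertainment function as one that is not:

```python
# Assumed capability table: whether a function can be completed purely through
# text-to-speech / speech-to-text, with no manual or visual participation.
VOICE_OPERABLE = {
    "WeChat": True,          # messages can be read out and dictated
    "dial": True,            # the vehicle machine can place the call itself
    "entertainment": False,  # requires manual/visual participation
}


def handle_voice_request(function_name):
    """Decide what the vehicle machine broadcasts after a voice command."""
    if VOICE_OPERABLE.get(function_name, False):
        return f"{function_name} has been opened"
    # Unknown or non-voice-operable functions are refused for safety.
    return f"The current scene {function_name} function is risky and cannot be run"
```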
- after the operation is performed, the vehicle machine can display the corresponding user interface according to the current scene of the vehicle. Alternatively, when the user completes the current operation in the voice control mode, the vehicle machine need not switch the user interface immediately according to the current scene, but can keep displaying the current user interface for a preset time period, so that the user can continue to operate the current function in the voice control mode without re-operating the first preset button to enter it.
- the current user interface includes a "microphone" icon, within a preset time period, the user can continue to use the current function to perform related operations by clicking the icon.
- the “microphone” icon is only an exemplary representation, and other icons or other forms may also be used to make the vehicle meet the above-mentioned functions, which is not specifically limited in the embodiment of the present application.
- the above icon may not be set to realize the continued use of the voice control mode.
- the vehicle machine can be directly configured so that, after the user's current voice operation is completed, it remains in the voice control mode for a preset time period, allowing the user to operate the functions on the touch screen directly by voice. This embodiment of the present application does not specifically limit this.
- referring to FIG. 5, when the vehicle machine determines in step S404 that the vehicle is in the preset scene, it need not perform step S405 immediately; instead, it may first perform step S407 to determine whether the current vehicle speed is greater than the second preset speed.
- the second preset speed is the minimum speed limit corresponding to the preset scene. That is, by performing step S407, when the user's vehicle is in a preset scene and the vehicle speed is low, all the functions on the vehicle touch screen can also be used to provide the user with a better experience.
- in step S407, the vehicle machine determines whether the current vehicle speed is greater than the second preset speed. If so, step S405 is executed; if the current vehicle speed is less than or equal to the second preset speed, step S403 is executed.
- the second preset speed is the minimum speed limit corresponding to the preset scene; different speed limits (second preset speeds) can be set for different preset scenes, and all the second preset speeds constitute a second preset speed set.
- the user may be in several preset scenes at the same time. For example, when a vehicle travels on a high-risk road at night, the vehicle machine determines that the vehicle is in the two preset scenes "night driving" and "high-risk road"; it then obtains the two second preset speeds corresponding to these two scenes from the second preset speed set, and takes the smaller of the two as the judgment basis for displaying the user interface.
- the second preset speed corresponding to the current scene can be obtained according to the second preset speed set, and the second preset speed can be used as a judgment basis for displaying the user interface of the vehicle.
- alternatively, the speed limits corresponding to the preset scenes can be uniformly set to a single second preset speed, and the judgment in step S407 is "yes" when the user's current vehicle speed is higher than that second preset speed.
- the above second preset speed set and second preset speed may be preset by the manufacturer according to vehicle performance, or preset according to the user's operating habits; this embodiment of the present application does not specifically limit this.
- different speed limit limits are respectively set according to different preset scenes to form a second preset speed set.
- for example, the second preset speed is set to 30km/h in one scene; when driving at night, it is set to 40km/h; when driving on high-risk roads, to 15km/h; when driving in bad weather, to 20km/h; and on a road section with a speed limit of A, to 50%*A. For example, when the current speed limit is 60km/h, the second preset speed is set to 30km/h.
- the second preset speed set ⁇ 30, 40, 15, 20, 50%*A ⁇ km/h can be obtained.
- the vehicle determines the corresponding second preset speeds according to the preset scene where the current user is located, and then obtains the smallest second preset speed among all the second preset speeds corresponding to the scene where the current vehicle is located.
- if the current vehicle speed is greater than that minimum second preset speed, the vehicle is currently in a scene requiring a high degree of concentration, and the user must focus on driving the vehicle. Therefore, step S405 is executed; that is, the vehicle machine displays the first user interface in which the first function is hidden or cannot be operated by the user.
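The construction of the second preset speed set and the selection of the smallest applicable threshold can be sketched as follows, using the example values above; the scene name strings are assumed labels:

```python
def second_preset_speed(scene, posted_limit_kmh=None):
    """Second preset speed (km/h) for one preset scene, per the example values."""
    table = {"night": 40.0, "high_risk_road": 15.0, "bad_weather": 20.0}
    if scene == "speed_limited":
        return 0.5 * posted_limit_kmh  # the 50%*A rule for speed-limited sections
    return table[scene]


def effective_threshold(active_scenes, posted_limit_kmh=None):
    """When several preset scenes apply at once, the smallest second preset
    speed among them is used as the judgment basis for step S407."""
    return min(second_preset_speed(s, posted_limit_kmh) for s in active_scenes)
```

For the night-plus-high-risk-road example above, the effective threshold is the smaller value, 15km/h.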
- a unified second preset speed can be formulated for the preset scene.
- the second preset speed is uniformly set to 20 km/h.
- by comparing the current vehicle speed of the user's vehicle with the second preset speed, it is determined whether the user currently needs a high concentration of attention, and the corresponding user interface of the vehicle machine is displayed accordingly.
- the second preset speed is formulated in combination with different conditions of different users or different conditions of vehicle performance.
- the second preset speed of the corresponding scene can be set according to factors that affect the user's driving proficiency, such as the user's years of driving. For example, a novice driver is already in a high-speed driving state when the vehicle speed exceeds 55km/h and cannot devote much attention to operating the vehicle machine, whereas an experienced driver cannot devote much attention to vehicle-machine operation only once the vehicle speed exceeds 65km/h. That is, the user's driving proficiency can be determined according to factors such as driving age, and different second preset speeds are then set for the different scenes to form the second preset speed set.
- the user can select a different second preset speed set, or the manufacturer can configure the set initially and then change it according to the annual inspection.
- the car machine can also configure the corresponding mode for the user according to the user's operation.
- different configuration modes are different second preset speed sets configured for different driving proficiency levels. For example, users can be divided into three levels according to driving proficiency: A-level, B-level, and C-level, where A-level users are considered the most proficient.
- the second preset speeds corresponding to the different preset scenes for A-level users can therefore be set to higher values.
- the driving proficiency of B-level users is at a normal level, so their second preset speeds can be set to a medium level; the speed set corresponding to B-level users can also serve as the default second preset speed set, so that vehicles whose drivers' proficiency cannot be determined are all configured with the B-level set.
- C-level users can use the second preset speed set formulated for novice drivers.
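One way to derive per-level speed sets, sketched under the assumption that the A/B/C levels simply scale a base (B-level) set; the scale factors and scene values are invented for illustration:

```python
# Assumed base (B-level) second preset speed set, in km/h.
BASE_SET = {"night": 40.0, "high_risk_road": 15.0, "bad_weather": 20.0}

# Illustrative scaling: proficient drivers get higher thresholds, novices lower.
LEVEL_SCALE = {"A": 1.2, "B": 1.0, "C": 0.8}


def speed_set_for(level="B"):
    """Second preset speed set for a proficiency level.

    B-level serves as the default for drivers whose proficiency is unknown.
    """
    scale = LEVEL_SCALE.get(level, LEVEL_SCALE["B"])
    return {scene: round(v * scale, 1) for scene, v in BASE_SET.items()}
```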
- the second preset speed of the corresponding scene can be set according to factors that affect the driving state of the user, such as the age of the user. For example, if the user suffers from heart disease and cannot better cope with emergencies, then the user's second preset speed set needs to be set lower to ensure the user's driving safety. Or, if the user is older, the response speed will be slower, and the second preset speed set also needs to be set lower.
- vehicles of different performance also differ in their ability to handle emergencies, so the second preset speed set corresponding to the different preset scenes can be set according to vehicle performance. For example, a vehicle with poor performance may also have a poor braking function and thus cannot brake in time to keep the user safe in an emergency. A lower second preset speed set should therefore be formulated for it, so that the user may touch and operate all the functions of the vehicle touch screen only at lower speeds, ensuring that the user's attention remains sufficient to deal with emergencies.
- a first time threshold may also be set. For example, while the first user interface is displayed, the vehicle may at the next moment enter a state that satisfies the condition for the touch screen to display the user interface containing all functions; the interface need not be switched immediately, but only after the current state has persisted longer than the first time threshold. Because road driving conditions are complicated, unexpected situations may temporarily reduce the vehicle speed, and at such times the user should devote more attention to observing and analyzing the road conditions rather than spending visual and cognitive resources on user-interface operations; setting the first time threshold therefore further ensures the user's driving safety.
- conversely, the user's current operation may need to be interrupted immediately and the user interface switched, to ensure the user's driving safety.
- alternatively, the user-interface switch can be performed after the user completes the current operation, but a voice broadcast should at the same time prompt the user that the current scene carries driving risk, so as to meet the user's needs while ensuring driving safety.
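The first time threshold behaves like hysteresis on the interface switch: restriction can apply immediately, while returning to the full interface waits until the relaxed condition has persisted. A minimal sketch, with the class name and the 10-second default assumed:

```python
class InterfaceSwitcher:
    """Hysteresis for UI switching, per the first time threshold above.

    Switching TO the restricted (first) user interface is immediate;
    switching BACK to the full interface happens only after the relaxed
    condition has held continuously for `threshold_s` seconds.
    """

    def __init__(self, threshold_s=10.0):
        self.threshold_s = threshold_s
        self.restricted = False
        self._relaxed_since = None  # time at which the relaxed condition began

    def update(self, needs_restriction, now_s):
        """Feed the current condition and time; return whether the UI is restricted."""
        if needs_restriction:
            self.restricted = True
            self._relaxed_since = None
        elif self.restricted:
            if self._relaxed_since is None:
                self._relaxed_since = now_s
            elif now_s - self._relaxed_since >= self.threshold_s:
                self.restricted = False
        return self.restricted
```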
- with this method for displaying the user interface, the second preset speed can be set on the basis of the preset scene and the first preset speed; the user interface to be displayed can then be determined according to the scene the vehicle is in and its real-time driving speed, so that when the user needs a high degree of concentration the vehicle machine displays the first user interface, thereby ensuring the user's safety.
- the user can also obtain, in the first user interface, other ways to operate the first function that occupy fewer of the user's visual or cognitive resources, thereby improving the user experience.
- in contrast, if the user interface displayed all functions at all times, the user would have to divert attention to operating functions that occupy many visual and cognitive resources even when he needs to concentrate on driving, which makes driving unsafe.
- the embodiment of the present application also provides a vehicle-mounted terminal, which may include: a startup unit, a collection unit, a processing unit, a display unit, and the like. These units can execute the steps in the above-mentioned embodiments to realize the method of displaying the user interface.
- the starting unit is used to support the vehicle-mounted terminal to perform the starting of the vehicle in S401 in FIG. 4.
- the collection unit is used to support the vehicle-mounted terminal to perform the collection of vehicle speed, environment information of the scene in which the vehicle is located, user voice information, etc. by the vehicle machine in S402, S404, and S406 in FIG. 4, etc.
- the processing unit is used to support the vehicle-mounted terminal to execute S402, S404, S406 in FIG. 4, S407 in FIG. 5, and so on.
- the display unit is used to support the in-vehicle terminal to perform the display of the user interface of S403 and S405 in FIG. 4. And/or other processes used in the scheme described herein.
- the embodiment of the present application also provides a vehicle-mounted terminal, including one or more processors, a memory, and a touch screen used to detect touch operations and display an interface.
- instructions are stored in the memory, and when the instructions are executed by one or more processors, the vehicle-mounted terminal is caused to execute each step in the above-mentioned embodiment, so as to realize the method of displaying a user interface in the above-mentioned embodiment.
- the processor in the vehicle-mounted terminal may be the processor 210 shown in FIG. 3;
- the memory in the vehicle-mounted terminal may be the memory 220 shown in FIG. 3;
- the touch screen in the vehicle-mounted terminal may be a combination of the display screen 260 and the touch sensor 290E shown in FIG. 3.
- the chip system includes at least one processor 1601 and at least one interface circuit 1602.
- the processor 1601 and the interface circuit 1602 may be interconnected by wires.
- the interface circuit 1602 may be used to receive signals from other devices (such as the memory of the electronic device 200).
- the interface circuit 1602 may be used to send signals to other devices (such as the processor 1601).
- the interface circuit 1602 can read instructions stored in the memory and send the instructions to the processor 1601.
- when the processor 1601 executes the instructions, the electronic device can be made to execute the steps performed by the electronic device 200 (for example, a vehicle machine) in the foregoing embodiments.
- the chip system may also include other discrete devices, which are not specifically limited in the embodiment of the present application.
- the embodiments of the present application further provide a computer storage medium storing computer instructions; when the computer instructions run on the vehicle-mounted terminal, the vehicle-mounted terminal executes the above related method steps to realize the method for displaying a user interface in the above embodiments.
- the embodiments of the present application further provide a computer program product which, when run on a computer, causes the computer to execute the above related steps, so as to realize the method for displaying a user interface in the above embodiments.
- the embodiments of the present application further provide a device, which may specifically be a component or a module.
- the device may include a processor and a memory connected to each other, wherein the memory is used to store computer-executable instructions.
- when the device is running, the processor may execute the computer-executable instructions stored in the memory, so as to make the device execute the method for displaying a user interface in the foregoing method embodiments.
- the vehicle-mounted terminal, chip system, computer storage medium, computer program product, or chip provided in the embodiments of the present application are all used to execute the corresponding methods provided above; therefore, for the beneficial effects they can achieve, reference may be made to the beneficial effects of the corresponding methods provided above, which are not repeated here.
- the disclosed method can be implemented in other ways.
- the above-described embodiments of the vehicle-mounted terminal are only illustrative.
- the division of the modules or units is only a logical function division.
- the displayed or discussed mutual coupling, direct coupling, or communication connection may be indirect coupling or a communication connection through some interfaces, modules, or units, and may be in electrical, mechanical, or other forms.
- the units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
- the functional units in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
- the above-mentioned integrated unit can be implemented in the form of hardware or software functional unit.
- if the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
- based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or some of the steps of the methods described in the embodiments of the present application.
- the aforementioned storage media include media that can store program instructions, such as flash memory, removable hard disks, read-only memory, random access memory, magnetic disks, or optical disks.
Abstract
The present application provides a method for displaying a user interface and a vehicle-mounted terminal, relating to the field of terminal technology, which enable the vehicle-mounted terminal to switch between different user interfaces according to the scene the vehicle is in and the current vehicle speed, so as to ensure the user's driving safety. The method includes: if the vehicle is in a driving state, the vehicle-mounted terminal determines whether the vehicle meets a first preset condition, the first preset condition including the vehicle being in a preset scene or the current speed being greater than a first preset speed; if it is determined that the vehicle meets the first preset condition, the vehicle-mounted terminal displays a first user interface in which a first function is not displayed; or, the first function is displayed in the first user interface in a first preset manner, the first preset manner being used to indicate that the first function cannot be operated by the user.
Description
This application claims priority to Chinese Patent Application No. 201910808405.0, filed with the China National Intellectual Property Administration on August 29, 2019 and entitled "Method for Displaying User Interface and Vehicle-Mounted Terminal", which is incorporated herein by reference in its entirety.

The present application relates to the field of terminal technology, and in particular to a method for displaying a user interface and a vehicle-mounted terminal.

With the development of technology, touch-based in-vehicle systems have in recent years gradually become standard equipment on vehicles from all manufacturers. Compared with traditional in-vehicle systems based on physical controls, touch-based in-vehicle systems can provide richer functions, such as calling, navigation, video, and music. Unlike mobile interaction scenarios, in the in-vehicle scenario the driving task is the primary task and all other tasks are secondary, so the in-vehicle system should be used only on the premise of driving safety. However, because the touch screen of an in-vehicle system cannot provide tactile feedback, compared with physical controls it occupies more of the user's visual resources while driving, affecting driving safety.
SUMMARY
The present application provides a method for displaying a user interface and a vehicle-mounted terminal, which enable the vehicle-mounted terminal to switch between different user interfaces according to the scene the vehicle is in and the current vehicle speed, so as to ensure the user's driving safety.

To achieve the above purpose, the present application adopts the following technical solutions:

In a first aspect, the present application provides a method for displaying a user interface. The method may include: if the vehicle is in a driving state, the vehicle-mounted terminal determines whether the vehicle meets a first preset condition, the first preset condition including the vehicle being in a preset scene or the current speed being greater than a first preset speed; if it is determined that the vehicle meets the first preset condition, the vehicle-mounted terminal displays a first user interface in which a first function is not displayed; or, the first function is displayed in the first user interface in a first preset manner, the first preset manner being used to indicate that the first function cannot be operated by the user.

The first function is a function that occupies more of the user's visual or cognitive resources. Exemplarily, the first function includes any one or more of the following: an entertainment function, a messaging function, the dialing function of the call function, the contact-search function of the call function, the address-search function of the navigation function, the song-search function of the music function, and the weather-query function of the weather function.

In this way, by setting the preset scenes and the first preset speed, the vehicle-mounted terminal can determine, while the vehicle is travelling, whether the vehicle is in a preset scene or whether the current speed is greater than the first preset speed, and hence whether the current vehicle state requires the user to concentrate fully on driving. When the user needs to concentrate fully on driving, the touch screen of the vehicle-mounted terminal does not display, or displays but does not allow the user to use, those functions that occupy more of the user's visual or cognitive resources, so that the user stays focused on driving and driving safety is ensured.
In a possible implementation, a second function is displayed in the first user interface in a second preset manner, the second preset manner being used to indicate that the user is allowed to operate the second function.

The second function is a function that occupies fewer of the user's visual or cognitive resources. Exemplarily, the second function includes any one or more of the following: the recent-contacts function of the call function, the recommended-address function of the navigation function, the recent-playlist function of the music function, and the local real-time weather function of the weather function.

In a possible implementation, after the vehicle-mounted terminal displays the first user interface, the method further includes: if the vehicle-mounted terminal detects a first operation by the user on the first user interface, the vehicle-mounted terminal displays a third user interface, in which only the first function is displayed, in the first preset manner. The first operation is any one of the following operations by the user on the touch screen of the vehicle-mounted terminal: a tap operation, a slide operation, a preset press operation, or a preset gesture operation.

In this way, when the current interface does not display certain functions, or displays all functions but some of them cannot be operated, a few simple operations can make the vehicle-mounted terminal gather all functions that occupy high visual and cognitive resources onto a single interface, so that the user can quickly locate a function that is hidden or disabled but that the user wants to operate, reducing the attention lost to searching for functions.
In a possible implementation, the preset scene includes any one or more of the following: the vehicle travelling on a crowded road section, the vehicle travelling on a high-risk road section, the vehicle travelling at night, the vehicle travelling in severe weather conditions, or the vehicle travelling on a speed-limited road section.

In a possible implementation, determining whether the vehicle is in a preset scene includes: the vehicle-mounted terminal determines whether the vehicle is travelling on a crowded road section based on at least one of the following data: the driving state of the vehicle, images of the vehicle's surroundings, or the driving route; and/or determines whether the vehicle is travelling on a high-risk road section based on at least one of: the driving route, the driving state of the vehicle, or images of the vehicle's surroundings; and/or determines whether the vehicle is travelling at night based on at least one of: the ambient light conditions of the vehicle, images of the vehicle's surroundings, or the real-time clock; and/or determines whether the vehicle is travelling in severe weather conditions based on at least one of: the ambient light conditions, images of the surroundings, or data from weather software; and/or determines whether the vehicle is travelling on a speed-limited road section based on at least one of: images of the vehicle's surroundings or current road information.

In a possible implementation, if the vehicle-mounted terminal detects a second operation by the user, the vehicle-mounted terminal starts a voice control mode; the second operation is an operation on a first preset button or on the icon of the first function and is used to start the first function; the voice control mode is used for voice interaction between the user and the vehicle-mounted terminal.

In this way, the user can use, through the voice control mode, some functions that occupy more of the user's visual and cognitive resources, while voice control does not claim too much of the user's attention; therefore, the user's needs can be met and the user experience improved on the premise of driving safety.

In a possible implementation, the vehicle-mounted terminal starting the voice control mode specifically includes: the vehicle-mounted terminal starts a voice interaction function; using the voice interaction function, the vehicle-mounted terminal prompts the user that the first function can be operated through a third operation and tells the user how to perform the third operation; the third operation is an operation in the form of voice interaction.

In a possible implementation, the first preset condition further includes: the vehicle being in a preset scene with the current speed greater than a second preset speed, where the second preset speed is the minimum speed limit corresponding to the preset scene.

In this way, by setting a speed limit for each preset scene, the first function need not be hidden or disabled when the vehicle is in a preset scene but travelling slowly, further improving the user experience on the premise of driving safety.

In a possible implementation, if it is determined that the vehicle does not meet the first preset condition, the vehicle-mounted terminal displays a second user interface, in which the first function and the second function are both displayed in the second preset manner, the second preset manner indicating that the user is allowed to operate the first function and the second function.
In a second aspect, an embodiment of the present application provides a vehicle-mounted terminal; this electronic device may be a device implementing the method of the first aspect. The electronic device may include: one or more processors; a memory storing instructions; and a touch screen for detecting touch operations and displaying interfaces. When the instructions are executed by the one or more processors, the electronic device is caused to execute the method for displaying a user interface of the first aspect.

In a third aspect, the present application provides a vehicle-mounted terminal having the function of implementing the method for displaying a user interface of any one of the first aspect. The function may be implemented by hardware, or by hardware executing corresponding software; the hardware or software includes one or more modules corresponding to the above function.

In a fourth aspect, an embodiment of the present application provides a computer storage medium comprising computer instructions which, when run on a vehicle-mounted terminal, cause the vehicle-mounted terminal to execute the method described in the first aspect or any of its possible implementations.

In a fifth aspect, an embodiment of the present application provides a computer program product which, when run on a computer, causes the computer to execute the method described in the first aspect or any of its possible implementations.

In a sixth aspect, a circuit system is provided, comprising a processing circuit configured to execute the method for displaying a user interface of any one of the first aspect.

In a seventh aspect, an embodiment of the present application provides a chip system comprising at least one processor and at least one interface circuit; the at least one interface circuit is used to perform transceiving functions and send instructions to the at least one processor, and when the at least one processor executes the instructions, the at least one processor executes the method for displaying a user interface of the first aspect or any of its possible implementations.
FIG. 1 is a schematic structural diagram of a communication system provided by an embodiment of the present application;
FIG. 2 is a schematic structural diagram of an electronic device provided by an embodiment of the present application;
FIG. 3 is a schematic structural diagram of yet another electronic device provided by an embodiment of the present application;
FIG. 4 is a schematic flowchart of a method for displaying a user interface provided by an embodiment of the present application;
FIG. 5 is a schematic flowchart of a method for displaying a user interface provided by an embodiment of the present application;
FIG. 6 is a schematic diagram of displaying a user interface provided by an embodiment of the present application;
FIG. 7 is a schematic diagram of yet another display of a user interface provided by an embodiment of the present application;
FIG. 8 is a schematic diagram of yet another display of a user interface provided by an embodiment of the present application;
FIG. 9 is a schematic diagram of yet another display of a user interface provided by an embodiment of the present application;
FIG. 10 is a schematic diagram of yet another display of a user interface provided by an embodiment of the present application;
FIG. 11 is a schematic diagram of yet another display of a user interface provided by an embodiment of the present application;
FIG. 12 is a schematic diagram of yet another display of a user interface provided by an embodiment of the present application;
FIG. 13 is a schematic diagram of yet another display of a user interface provided by an embodiment of the present application;
FIG. 14 is a schematic diagram of yet another display of a user interface provided by an embodiment of the present application;
FIG. 15 is a schematic diagram of yet another display of a user interface provided by an embodiment of the present application;
FIG. 16 is a schematic structural diagram of a chip system provided by an embodiment of the present application.
A method for displaying a user interface and a vehicle-mounted terminal provided by the embodiments of the present application are described in detail below with reference to the accompanying drawings.

The terms "first" and "second" in the specification and drawings of the present application are used to distinguish different objects, or different processing of the same object, rather than to describe a particular order of objects.

In addition, the terms "including" and "having" mentioned in the description of the present application, and any variations thereof, are intended to cover non-exclusive inclusion. For example, a process, method, system, product, or device comprising a series of steps or units is not limited to the listed steps or units, but optionally also includes other steps or units not listed, or optionally also includes other steps or units inherent to the process, method, product, or device.

It should be noted that in the embodiments of the present application, words such as "exemplary" or "for example" are used to mean serving as an example, illustration, or explanation. Any embodiment or design described as "exemplary" or "for example" in the embodiments of the present application should not be construed as preferable or more advantageous than other embodiments or designs; rather, such words are intended to present relevant concepts in a concrete manner.

In the description of the present application, unless otherwise specified, "multiple" means two or more. "And/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may mean: A alone, both A and B, or B alone.
Vehicle-mounted terminals (also described as vehicle machines, or head units) are mostly installed in the car's center console and enable communication between people and the car, and between the car and the outside world (car to car).

As shown in (a) of FIG. 1, in some embodiments of the present application, the vehicle-mounted terminal 200 can establish a communication connection with the electronic device 100 in a wired or wireless manner.

The electronic device 100 in the present application may be, for example, a mobile phone, a tablet computer, a personal computer (PC), a personal digital assistant (PDA), a smart watch, a netbook, a wearable electronic device, an augmented reality (AR) device, a virtual reality (VR) device, an in-vehicle device, a smart car, a smart speaker, a robot, or the like; the present application places no special restriction on the specific form of the electronic device.

Exemplarily, the vehicle machine 200 and the electronic device 100 may be interconnected using, for example, the MirrorLink standard, thereby realizing two-way control of specific application software between the electronic device 100 and the vehicle machine 200. In this way, while the car is being driven, the user does not need to look at the screen of the electronic device 100, touch its screen, or operate its physical keys; the user only needs to use the physical keys on the vehicle machine 200, touch the controls on the screen of the vehicle machine 200, or use voice commands to control the electronic device 100, including answering/making calls, listening to music on the phone, navigating with the phone, and so on; of course, the phone itself also remains operable.

In this case, the vehicle machine 200 can receive data sent by the electronic device 100, including but not limited to: image information inside or outside the vehicle collected by a camera device; the user's voice and other sounds in the surroundings detected by a built-in sound pickup; and data detected by sensors. In the present application, the vehicle machine 200 can make judgments based on its own data and the data obtained from the electronic device 100 to determine whether it is in a preset scene, such as severe weather or a congested road section, so that the vehicle machine 200 can perform corresponding operations, such as displaying a corresponding interface or giving a voice broadcast prompt.

As shown in (b) of FIG. 1, in other embodiments of the present application, the vehicle machine 200 may not establish a communication connection with the electronic device 100. In this case, the vehicle machine 200 can detect the user's touch operations through the touch screen and detect the user's operations on the physical keys on the steering wheel; the vehicle machine 200 can use the vehicle's own camera and sensing devices to obtain data and thereby determine whether the vehicle is in a preset scene, so that the vehicle machine 200 can perform corresponding operations, such as displaying a corresponding interface or giving a voice broadcast prompt.
FIG. 2 shows a schematic structural diagram of the electronic device 100.

The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, antenna 1, antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headset jack 170D, a sensor module 180, keys 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.

It can be understood that the structure illustrated in this embodiment of the present invention does not constitute a specific limitation on the electronic device 100. In other embodiments of the present application, the electronic device 100 may include more or fewer components than shown, combine some components, split some components, or arrange the components differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.

The processor 110 may include one or more processing units; for example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU). The different processing units may be independent devices or may be integrated in one or more processors.

In the embodiments of the present application, the application processor can determine, based on, for example, data obtained from the sensors (such as the acceleration obtained by the acceleration sensor 180E), whether the user is in a driving state and whether the user is in a driving scene requiring a high degree of concentration, so as to determine whether some functions of the electronic device 200 need to be disabled or controlled by voice. It can be understood that some or all of this data processing may also be handled by the GPU, the NPU, or the like; the embodiments of the present application do not limit this.

For example: when the electronic device 100 captures images of the vehicle's surroundings through the camera 193, or receives such images sent by the electronic device 200, the GPU or NPU of the electronic device 100 can be invoked to perform image analysis to determine whether the vehicle is on a high-risk road section, what the current weather is like, and so on.

The controller can generate operation control signals according to instruction opcodes and timing signals, completing the control of instruction fetch and execution.

A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache, which can hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, it can call them directly from the memory, avoiding repeated access, reducing the waiting time of the processor 110, and thus improving system efficiency.

In some embodiments, the processor 110 may include one or more interfaces, which may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface.
The charging management module 140 is used to receive charging input from a charger, which may be a wireless or wired charger. In some wired charging embodiments, the charging management module 140 can receive charging input from a wired charger through the USB interface 130. In some wireless charging embodiments, the charging management module 140 can receive wireless charging input through the wireless charging coil of the electronic device 100. While charging the battery 142, the charging management module 140 can also supply power to the electronic device through the power management module 141.

The power management module 141 is used to connect the battery 142 and the charging management module 140 with the processor 110. The power management module 141 receives input from the battery 142 and/or the charging management module 140 and supplies power to the processor 110, the internal memory 121, the display screen 194, the camera 193, the wireless communication module 160, and so on. The power management module 141 can also be used to monitor parameters such as battery capacity, battery cycle count, and battery health (leakage, impedance). In some other embodiments, the power management module 141 may also be provided in the processor 110; in still other embodiments, the power management module 141 and the charging management module 140 may be provided in the same device.

The wireless communication function of the electronic device 100 can be implemented through antenna 1, antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, the baseband processor, and so on.

Antenna 1 and antenna 2 are used to transmit and receive electromagnetic wave signals. Each antenna in the electronic device 100 can be used to cover a single communication frequency band or multiple bands. Different antennas can also be multiplexed to improve antenna utilization; for example, antenna 1 can be multiplexed as a diversity antenna for the wireless local area network. In other embodiments, the antennas can be used in combination with tuning switches.

The mobile communication module 150 can provide wireless communication solutions applied on the electronic device 100, including 2G/3G/4G/5G. The mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (LNA), and so on. The mobile communication module 150 can receive electromagnetic waves through antenna 1, filter and amplify the received waves, and transmit them to the modem processor for demodulation. It can also amplify the signal modulated by the modem processor and convert it into electromagnetic waves radiated through antenna 1. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be provided in the processor 110; in some embodiments, at least some of its functional modules may be provided in the same device as at least some modules of the processor 110.

The modem processor may include a modulator and a demodulator. The modulator modulates the low-frequency baseband signal to be transmitted into a medium/high-frequency signal; the demodulator demodulates the received electromagnetic wave signal into a low-frequency baseband signal and then transmits it to the baseband processor for processing. After being processed by the baseband processor, the low-frequency baseband signal is passed to the application processor, which outputs a sound signal through an audio device (not limited to the speaker 170A and the receiver 170B) or displays an image or video through the display screen 194. In some embodiments, the modem processor may be an independent device; in other embodiments, the modem processor may be independent of the processor 110 and provided in the same device as the mobile communication module 150 or other functional modules.

The wireless communication module 160 can provide wireless communication solutions applied on the electronic device 100, including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) networks), Bluetooth (BT), global navigation satellite systems (GNSS), frequency modulation (FM), near field communication (NFC), and infrared (IR) technology. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via antenna 2, frequency-modulates and filters the signals, and sends the processed signals to the processor 110. It can also receive signals to be sent from the processor 110, frequency-modulate and amplify them, and radiate them as electromagnetic waves through antenna 2.

In some embodiments, antenna 1 of the electronic device 100 is coupled with the mobile communication module 150 and antenna 2 with the wireless communication module 160, so that the electronic device 100 can communicate with networks and other devices through wireless communication technology. The wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology. The GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite based augmentation systems (SBAS).
The electronic device 100 implements its display function through the GPU, the display screen 194, the application processor, and so on. The GPU is a microprocessor for image processing, connecting the display screen 194 and the application processor; it performs mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or change display information.

The display screen 194 is used to display images, videos, and the like, and includes a display panel. The display panel may use a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, quantum dot light-emitting diodes (QLED), and so on. In some embodiments, the electronic device 100 may include 1 or N display screens 194, N being a positive integer greater than 1.

The electronic device 100 can implement the shooting function through the ISP, the camera 193, the video codec, the GPU, the display screen 194, the application processor, and so on.

The ISP is used to process the data fed back by the camera 193. For example, when taking a photo, the shutter opens, light is transmitted to the camera's photosensitive element through the lens, the light signal is converted into an electrical signal, and the photosensitive element passes the electrical signal to the ISP for processing and conversion into an image visible to the naked eye. The ISP can also algorithmically optimize the noise, brightness, and skin tone of the image, and optimize parameters such as the exposure and color temperature of the shooting scene. In some embodiments, the ISP may be provided in the camera 193.

The camera 193 is used to capture still images or video. An object generates an optical image through the lens, which is projected onto the photosensitive element; the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the light signal into an electrical signal and passes it to the ISP for conversion into a digital image signal. The ISP outputs the digital image signal to the DSP for processing, and the DSP converts it into a standard image signal in RGB, YUV, or other formats. In some embodiments, the electronic device 100 may include 1 or N cameras 193, N being a positive integer greater than 1.

The digital signal processor is used to process digital signals; in addition to digital image signals, it can process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor performs a Fourier transform on the frequency-point energy.

The video codec is used to compress or decompress digital video. The electronic device 100 can support one or more video codecs, so that it can play or record videos in multiple coding formats, such as moving picture experts group (MPEG) 1, MPEG2, MPEG3, and MPEG4.

The NPU is a neural-network (NN) computing processor that processes input information quickly by drawing on the structure of biological neural networks, for example the transfer patterns between neurons in the human brain, and can also learn continuously. Applications such as intelligent cognition of the electronic device 100, including image recognition, face recognition, speech recognition, and text understanding, can be implemented through the NPU.

The external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to realize the data storage function, for example saving music, videos, and other files on the external memory card.

The internal memory 121 can be used to store computer-executable program instructions. The internal memory 121 may include a program storage area and a data storage area. The program storage area can store the operating system and the applications required by at least one function (such as sound playback and image playback); the data storage area can store data created during use of the electronic device 100 (such as audio data and a phone book). In addition, the internal memory 121 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or universal flash storage (UFS). The processor 110 executes the various functional applications and data processing of the electronic device 100 by running instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor.
The electronic device 100 can implement audio functions, such as music playback and recording, through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headset jack 170D, the application processor, and so on.

The audio module 170 is used to convert digital audio information into analog audio signal output, and to convert analog audio input into digital audio signals. The audio module 170 can also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be provided in the processor 110, or some of its functional modules may be provided in the processor 110.

The speaker 170A, also called a "loudspeaker", is used to convert audio electrical signals into sound signals. The electronic device 100 can play music or hands-free calls through the speaker 170A.

The receiver 170B, also called an "earpiece", is used to convert audio electrical signals into sound signals. When the electronic device 100 answers a call or a voice message, the voice can be heard by bringing the receiver 170B close to the ear.

The microphone 170C, also called a "mic" or "mouthpiece", is used to convert sound signals into electrical signals. When making a call or sending a voice message, the user can speak with the mouth close to the microphone 170C to input the sound signal. The electronic device 100 may have at least one microphone 170C. In other embodiments, two microphones 170C may be provided to implement noise reduction in addition to collecting sound signals; in still other embodiments, three, four, or more microphones 170C may be provided to collect sound signals, reduce noise, identify the sound source, implement directional recording, and so on.

The headset jack 170D is used to connect wired headsets. The headset jack 170D may be the USB interface 130, a 3.5 mm open mobile terminal platform (OMTP) standard interface, or a cellular telecommunications industry association of the USA (CTIA) standard interface.

The keys 190 include a power key, volume keys, and so on. The keys 190 may be mechanical keys or touch keys. The electronic device 100 can receive key input and generate key signal input related to the user settings and function control of the electronic device 100.

The motor 191 can generate vibration prompts. The motor 191 can be used for incoming-call vibration prompts and for touch vibration feedback. For example, touch operations acting on different applications (such as photographing and audio playback) can correspond to different vibration feedback effects, as can touch operations acting on different areas of the display screen 194. Different application scenarios (such as time reminders, receiving messages, alarms, and games) can also correspond to different vibration feedback effects, and the touch vibration feedback effect can also be customized.

The indicator 192 may be an indicator light that can be used to indicate the charging state and battery changes, and to indicate messages, missed calls, notifications, and so on.

The SIM card interface 195 is used to connect a SIM card.
For example, FIG. 3 shows a schematic structural diagram of the electronic device 200.

The electronic device 200 may include a processor 210, a memory 220, a wireless communication module 230, a speaker 240, a microphone 250, a display screen 260, a camera 270, a USB interface 280, a sensor module 290, and the like. The sensor module 290 may include a pressure sensor 290A, a magnetic sensor 290B, an acceleration sensor 290C, a temperature sensor 290D, a touch sensor 290E, an ambient light sensor 290F, a positioning system 290G, a rain sensor 290H, and so on. For example: the electronic device 200 can determine from the data of the acceleration sensor 290C whether the vehicle is in motion, that is, whether the user is driving. As another example: the electronic device 200 can determine from the data of the rain sensor 290H whether the vehicle is in thunderstorm weather, or the electronic device 200 can send the data of the rain sensor 290H to the electronic device 100 so that the electronic device 100 can determine whether the vehicle is in thunderstorm weather. As yet another example: the electronic device 200 can use the ambient light sensor 290F to determine whether the vehicle is in dense fog or darkness, or the electronic device 200 can send the data of the ambient light sensor 290F to the electronic device 100 so that the electronic device 100 can determine whether the vehicle is in such an environment.

The processor 210 may include one or more processing units; the different processing units may be independent devices or may be integrated in one or more processors.

In some embodiments of the present application, the electronic device 200 can obtain, for example, sensor data on the electronic device 200 or the electronic device 100 to judge the user's state and the scene the user is in, so as to determine the corresponding user interface to display.

In still other embodiments of the present application, multiple applications (such as a music player, a navigation application, a call application, and a notification application) can run on the electronic device 200; these applications can communicate directly with application servers, for example receiving new messages or incoming-call reminders sent by an application server, and sending the corresponding operations performed on the user interface to the application server. The electronic device 200 can obtain, for example, the data of one or more sensors, or receive data sent by the electronic device 100, to judge the user's state and the scene the user is in, so as to determine whether to display a user interface in which some functions and/or operations are disabled.

The memory 220 can be used to store computer-executable program instructions. The memory 220 may include a program storage area and a data storage area. The program storage area can store the operating system and the applications required by at least one function (such as sound playback and image playback); the data storage area can store data created during use of the electronic device 200 (such as audio data and a phone book). In addition, the memory 220 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or universal flash storage (UFS). The processor 210 executes the various functional applications and data processing of the electronic device 200 by running instructions stored in the memory 220 and/or instructions stored in a memory provided in the processor.

The wireless communication module 230 can provide wireless communication solutions applied on the electronic device 200, including WLAN (such as Wi-Fi networks), Bluetooth, NFC, and IR. The wireless communication module 230 may be one or more devices integrating at least one communication processing module.

In some embodiments of the present application, the electronic device 200 can establish a wireless communication connection with the electronic device 100 through the wireless communication module 230. Alternatively, the electronic device 200 can establish a wired communication connection with the electronic device 100 through the USB interface 280; the embodiments of the present application do not limit this.

In some embodiments of the present application, the electronic device 200 can capture images of the vehicle's surroundings through the camera 270, so that the electronic device 200 or the electronic device 100 can perform image analysis to determine the scene the user is currently in, for example whether the vehicle is travelling on a high-risk road section.

For the roles of components such as the speaker 240, the microphone 250, and the display screen 260, refer to the related descriptions of the electronic device 100, which are not repeated here.

It can be understood that the structure illustrated in this embodiment of the present invention does not constitute a specific limitation on the electronic device 200. In other embodiments of the present application, the electronic device 200 may include more or fewer components than shown, combine some components, split some components, or arrange the components differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The technical solutions provided by the embodiments of the present application are described in detail below, taking the case where the electronic device 100 is a mobile phone and the electronic device 200 is a vehicle machine as an example.

In the prior art, touch-based in-vehicle systems (vehicle machines) have gradually become standard equipment on vehicles from all manufacturers. However, because the touch screen of an in-vehicle system cannot provide tactile feedback, it occupies more of the user's visual resources while driving, preventing the user from focusing on driving and affecting driving safety. In addition, while driving, the user can also choose to connect the phone to the vehicle machine; the interaction between the functions of the vehicle machine and the phone further enriches the use of the in-vehicle system. While driving, the user may pay attention to the display of the in-vehicle system in order to handle notification messages received on the phone or to use the phone's richer entertainment functions. However, when the user is driving at high speed or in certain specific scenes, for example driving on a crowded road section, driving through a high-risk road section, driving at night, driving in severe weather, or driving on a speed-limited road section, the user should devote more energy to focusing on driving and must not be distracted. That is, in some cases, a user interface should be provided that reduces the visual and cognitive resources occupied by the vehicle machine display, so as to ensure driving safety.

To this end, the embodiments of the present application provide a method for displaying a user interface. After the vehicle machine starts working, it can automatically identify the various scenes the vehicle is in and display different user interfaces based on the different scenes; when the scene the vehicle is in changes, the corresponding user interface can be displayed according to the changed scene. The displayed user interfaces may include a first user interface and a second user interface. The first user interface is a user interface in which some functions are not displayed, or are displayed but cannot be operated; the second user interface is a user interface that displays all functions. The above functions include both some functions and certain operations of some functions, where a function may be presented in the form of an application (APP).

For example: when it is identified that the user is not driving, the touch screen of the vehicle machine can display the second user interface.

As another example: when it is identified that the user is driving but is in a scene that does not require excessive concentration, that is, a scene that allows the user to perform some other operations that slightly occupy visual or cognitive resources, i.e., a scene in which the user need not concentrate highly, the touch screen of the vehicle machine can display the second user interface.

As yet another example: when it is identified that the user is driving and is in a scene requiring a high degree of concentration, the touch screen of the vehicle machine displays the first user interface.

The method for displaying a user interface in the embodiments of the present application can be preconfigured in the vehicle-mounted terminal so that all vehicle-mounted terminals use this method to display user interfaces; alternatively, according to the user's choice, the vehicle-mounted terminal may cease to use this method after the user so sets.

The above specific scenes requiring a high degree of concentration are, for example: the user driving at high speed (a first preset speed can be determined according to vehicle performance, and high-speed driving is determined when the real-time speed exceeds the first preset speed); the user driving on a crowded road section (e.g., a commercial center, tourist attraction, or school zone); the user driving through a high-risk road section (e.g., continuous curves, an expressway, or a tunnel); the user driving at night (e.g., determined from the sunset time or from the light intensity); the user driving in severe weather (e.g., dense fog, snow and ice, or thunderstorms); and the user driving on a speed-limited road section (e.g., a downhill section, a village section, or an intersection).

Sometimes the vehicle machine cannot accurately identify the scene the user is in; when a phone is connected to the vehicle machine, the recognition results of the phone and the vehicle machine can be combined to identify the user's scene more accurately. For example, when it is identified that the user is driving but not in a scene requiring a high degree of concentration, the touch screen of the vehicle machine can display the user interface containing all functions; when it is identified that the user is in a scene requiring a high degree of concentration, the touch screen displays a user interface in which some functions and/or operations are hidden or disabled.

In some embodiments, the user may initially be in a scene that does not require a high degree of concentration and may be operating certain functions on the second user interface of the touch screen; during that operation the user may enter a scene requiring a high degree of concentration. At this point the vehicle machine may immediately interrupt the current operation and switch to displaying the first user interface, to ensure driving safety. Alternatively, the user interface may be switched after the user completes the current operation, to preserve the user experience. The embodiments of the present application place no specific limitation on this.
As shown in FIG. 4, a flowchart of a method for switching user interfaces provided by an embodiment of the present application is as follows:

S401: The vehicle machine starts working.

Specifically, after the user starts the vehicle and enters driving mode, the vehicle machine begins to display the user interface of the touch screen, at which point the vehicle machine is determined to have started working. Generally, the user interface at this time is the default home page, that is, the second user interface containing all functions (see FIG. 6).

It should be noted that the embodiments of the present application do not restrict whether a phone is connected to the vehicle machine; the phone may be connected to the vehicle machine, or may not be. This step only considers whether the vehicle machine has started working, regardless of the connection status between phone and vehicle machine.

When no phone is connected to the vehicle machine, the vehicle machine identifies the scene by itself. When a phone is connected to the vehicle machine, the phone's data can be synchronized with the vehicle machine, and the vehicle machine identifies the scene the user is in by combining the data collected by the phone, then displays the user interface corresponding to that scene more precisely. In other words, the vehicle machine can take on all or part of the phone's data processing work; the embodiments of the present application do not limit the division of labor between vehicle machine and phone.

The following steps are described taking as an example the vehicle machine judging the user's state and scene and determining the user interface displayed on its touch screen.

S402: The vehicle machine determines whether the vehicle is in a driving state. If it is not in a driving state (i.e., in a non-driven state), step S403 is executed. If it is in a driving state, step S404 is executed.

In one possible implementation, information such as the vehicle's speed measured by the vehicle machine can be obtained to determine whether the vehicle is moving, and hence whether it is in a driving state. Alternatively, the real-time position of the vehicle machine can be determined through its global positioning system (GPS), and the movement speed of the vehicle machine calculated from the real-time position. Then, it is judged whether the movement speed of the vehicle machine exceeds a threshold: if it does, the vehicle can be considered to be in a driving state; if it does not, the vehicle can be considered not to be in a driving state.

Optionally, when a phone is connected to the vehicle machine, the vehicle machine can also obtain the phone's data and derive the phone's movement speed through sensors inside the phone (such as the acceleration sensor), thereby determining whether the vehicle is being driven.
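The driving-state check of S402 can be sketched as follows. This is a minimal illustration, not the patented implementation: the function names, the 5 km/h threshold, and the flat-earth GPS approximation are all assumptions made for the example.

```python
def is_driving(speed_kmh, threshold_kmh=5.0):
    """Return True if the measured speed suggests the vehicle is being driven.

    The speed may come from the head unit's own speed sensor, from GPS
    fixes differentiated over time, or from a connected phone's sensors;
    the threshold value here is purely illustrative.
    """
    return speed_kmh > threshold_kmh


def speed_from_gps(p1, p2, dt_s):
    """Estimate speed (km/h) from two planar GPS fixes (x, y in metres)
    taken dt_s seconds apart -- a crude flat-earth approximation."""
    dx = p2[0] - p1[0]
    dy = p2[1] - p1[1]
    metres = (dx * dx + dy * dy) ** 0.5
    return metres / dt_s * 3.6  # m/s -> km/h
```

In a real head unit the GPS-derived speed would be smoothed over several fixes before being compared with the threshold; a single pair of fixes is shown only to keep the sketch short.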
S403: The touch screen of the vehicle machine displays the second user interface.

Specifically, when it is determined that the vehicle is not in a driving state, or in other words when the user of the vehicle does not need to concentrate highly, the user is considered able to operate the vehicle machine conveniently. The touch screen of the vehicle machine then displays the first function and the second function in the second user interface in the second preset manner, the second preset manner indicating that the user is allowed to operate the first function and the second function. The first function and the second function together constitute all the functions displayed on the touch screen of the vehicle machine; that is, the second user interface at this time displays all functions for the user to choose from and use.

As shown in FIG. 6, a user interface of a touch-based vehicle machine is given as an example, comprising a main menu 601 displayed on the left and a display interface 602 displayed on the right. The main menu 601 shows the main functions of the vehicle machine, and the content of the display interface 602 shows a different user interface corresponding to each function in the main menu 601 on the left. The main menu 601 of the user interface shown in FIG. 6 exemplarily shows five functions: home, navigation, call, music, and vehicle control. Generally, the touch screen displays all commonly used functions on the home page interface (i.e., the display interface 602 displayed after the "home" function on the main menu 601 is selected). As shown in FIG. 6, when the "home" function on the main menu 601 is selected, the display interface 602 correspondingly shows the home page interface, containing six commonly used functions: map, WeChat, contacts, phone, entertainment, and weather.

It can be understood that the vehicle machine may also contain other main or commonly used application functions that can be displayed on its touch screen, and that, based on touch operations, the current home page interface can be switched to display more commonly used functions; for this, refer to the prior art, which the embodiments of the present application do not describe in detail.

Afterwards, the user can tap the corresponding functions step by step as needed to enter submenus and reach the final function. For example, a user who wants to make a call with the vehicle machine can touch the phone icon to enter the call interface, and then perform dialing or contact lookup to place the call. Or, as shown in FIG. 7, when the user taps the call function on the main menu 601 on the left of the screen, the display interface 602 correspondingly shows the call interface, in which the user can be offered operations such as dialing, contact search, and browsing the recent contacts list (this last interface is not shown in FIG. 7; it can be understood that this function interface can be entered through related operations).
S404: The vehicle machine determines whether the vehicle is in a preset scene or the current speed exceeds the first preset speed. If the vehicle machine determines that the vehicle is in a preset scene or the current speed exceeds the first preset speed, step S405 is executed; if the vehicle machine determines that the vehicle is not in a preset scene and the current speed is less than or equal to the first preset speed, step S403 is executed.

Specifically, the vehicle being in a preset scene or the current speed exceeding the first preset speed can be defined as the first preset condition; it is then judged that, when the vehicle meets the first preset condition, the touch screen of the vehicle machine displays the first user interface.

The preset scene may be, for example: the user driving on a crowded road section (e.g., a commercial center, tourist attraction, or school zone); the user driving through a high-risk road section (e.g., continuous curves, an expressway, or a tunnel); the user driving at night (e.g., determined from the sunset time or from the light intensity); the user driving in severe weather (e.g., dense fog, snow and ice, or thunderstorms); or the user driving on a speed-limited road section (e.g., a downhill section, a village section, or an intersection).

Whether the current scene is one of the above preset scenes can be a conclusion drawn by the vehicle machine from sensor data or by other means of analysis; the judgment methods for five exemplary preset scenes are described in detail below.
Preset scene 1: the vehicle is on a crowded road section.

When judging whether the user is driving on a crowded road section, sensor data in the vehicle machine and/or the phone can be analyzed. The vehicle machine judges whether the vehicle is on a crowded road section based on at least one of the following data: the driving state of the vehicle, images of the vehicle's surroundings, or the driving route. For example: whether the vehicle is on a crowded section can be determined from the vehicle's speed; whether the vehicle is at a commercial center or another crowded place can also be determined from the vehicle's position.

Optionally, since the vehicle should be able to obtain its speed in real time, the vehicle machine can obtain the variation of the speed over a period of time; if within a preset period the speed changes repeatedly and remains low, that is, the vehicle brakes repeatedly within the preset period, it can be determined that the vehicle is on the above crowded road section and cannot travel smoothly.

Optionally, since navigation software contains road information for each section, for example such-and-such commercial center section, such-and-such tourist attraction, or a school zone, and such sections are generally crowded, the vehicle's current position can be determined through the vehicle machine and/or the phone, and whether the vehicle is on the above crowded road section determined from the current position and the road information for that position in the navigation software.

Optionally, the environment outside the vehicle can also be recognized by image analysis of images of the vehicle's surroundings taken by a camera in, or connected to, the vehicle machine, thereby determining whether the vehicle is on a crowded road section. A value can be preset; for example, when there are more than 10 people around the vehicle, the vehicle is judged to be on a crowded road section.
Preset scene 2: the vehicle is passing through a high-risk road section.

When judging whether the user is driving through a high-risk road section, sensor data in the vehicle machine and/or the phone can be analyzed. The vehicle machine judges whether the vehicle is on a high-risk road section based on at least one of the following data: the driving route of the vehicle, the driving state, or images of the vehicle's surroundings. For example: from the driving route, it can be determined whether the vehicle is making continuous turns; from the vehicle's speed, whether the vehicle is on an expressway; from the driving route and speed, whether the vehicle is driving on a slippery surface; and, through a light sensor, the light around the vehicle can be judged to determine whether the vehicle is in a tunnel.

Optionally, the environment outside the vehicle can also be recognized by image analysis of images of the vehicle's surroundings taken by a camera in, or connected to, the vehicle machine, thereby determining whether the vehicle is on the above high-risk road section.

Optionally, since navigation software contains road condition data for each section (e.g., sharp curves, tunnels), the vehicle's current position can also be determined through the phone or the vehicle machine, and whether the vehicle is on the above high-risk road section determined from the current position and the road condition data for that position in the navigation software.
Preset scene 3: the vehicle is travelling at night.

When judging whether the user is driving at night, sensor data in the vehicle machine and/or the phone can be analyzed. The vehicle machine judges whether the vehicle is travelling at night based on at least one of the following data: the ambient light conditions of the vehicle, images of the vehicle's surroundings, or the real-time clock. For example: when the light intensity stays below a threshold for longer than a threshold duration, the user is determined to be driving at night; or, if the user drives during the period from one hour after sunset to one hour before sunrise, the user is determined to be driving at night.

Optionally, image analysis of images of the surroundings taken by the camera of the vehicle machine and/or the phone can be used to recognize the light intensity of the vehicle's environment and judge whether the user is driving at night.

Optionally, the light around the vehicle can be judged by the light sensor of the vehicle machine, and the duration of the light conditions can be further combined to determine more accurately whether the user is driving at night.

Optionally, the start of night driving can be judged from the sunset time for the current season and position coordinates, and the end of night driving from the sunrise time. For example, if at some place in the current season the sun rises at 7:00 and sets at 18:00, the night driving period can be defined as 19:00 to 6:00 the next day, and a user driving during this period is driving at night.
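The sunrise/sunset heuristic above can be sketched as follows. This is an illustrative example only: the one-hour offsets and whole-hour arithmetic mirror the 7:00/18:00 example in the text, and the function names are assumptions.

```python
from datetime import time

def night_window(sunrise, sunset):
    """Night driving starts one hour after sunset and ends one hour
    before sunrise (whole hours only, as in the text's example)."""
    start = time((sunset.hour + 1) % 24, sunset.minute)
    end = time((sunrise.hour - 1) % 24, sunrise.minute)
    return start, end

def is_night(now, sunrise, sunset):
    """True if `now` falls inside the night-driving window; the window
    normally wraps past midnight, so both branches are handled."""
    start, end = night_window(sunrise, sunset)
    if start <= end:
        return start <= now <= end
    return now >= start or now <= end  # window wraps past midnight
```

With sunrise at 7:00 and sunset at 18:00, this reproduces the 19:00 to 6:00 window given in the text.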
Preset scene 4: the current weather conditions are severe.

When judging whether the user is driving in severe weather, sensor data in the phone and/or the vehicle machine can also be analyzed. The vehicle machine judges whether the vehicle is travelling in severe weather conditions based on at least one of the following data: the ambient light conditions of the vehicle, images of the surroundings, or data from weather software. For example: the light sensor in the vehicle machine can determine the vehicle's surroundings, for example that the vehicle is in dense fog or darkness; the rain sensor in the vehicle machine can determine whether the vehicle is in a thunderstorm.

Optionally, the environment outside the vehicle can also be recognized by image analysis of images of the vehicle's surroundings taken by a camera connected to the vehicle machine, thereby determining whether the vehicle is in the above severe weather.

Optionally, the current weather conditions can also be obtained through a weather application to judge whether the vehicle is in the above severe weather.
Preset scene 5: the vehicle is on a speed-limited road section.

When judging whether the user is driving on a speed-limited road section, the vehicle machine judges based on at least one of the following data: images of the vehicle's surroundings or current road information. For example, the current road information can be determined through navigation software in the vehicle machine and/or the phone to determine whether the user is on a speed-limited section. Typical special sections, such as downhill sections, village sections, and intersections, all impose speed limits to ensure driving safety, and navigation software contains information on such special sections, so the vehicle machine can confirm from the position of the user's vehicle whether the user is on a speed-limited section.

Optionally, image analysis of the speed-limit signs set up along the road around the vehicle, photographed by a camera connected to the vehicle machine, can also determine whether the vehicle is on a speed-limited road section.

The vehicle machine can obtain the corresponding data by the above methods to judge whether the vehicle is in a preset scene, and in which one or more preset scenes, and hence whether the user needs to concentrate highly at this time.
Vehicle speed is a major factor affecting the user's attention: when the speed is high, the user must concentrate on driving the vehicle. Therefore, a first preset speed is set to judge whether the vehicle is currently in a high-speed driving state.

The first preset speed is a threshold determined according to vehicle performance or preset directly. When judging whether the user is driving at high speed, the vehicle machine can obtain the measured vehicle speed and other information in real time, and compare the real-time speed with the first preset speed, thereby determining whether the user is driving at high speed. For example, the first preset speed can be set to 60 km/h: when the user drives faster than 60 km/h, the user is considered to be driving at high speed. Or, if the vehicle the user is currently using performs well, the first preset speed can be set to 65 km/h; or, if the user currently using the vehicle is a novice driver or is older, a lower first preset speed, such as 50 km/h, can be set. Of course, the first preset speed can also be set according to other circumstances, which the embodiments of the present application do not specifically limit.

Optionally, the real-time position of the vehicle machine can also be determined through its GPS, and the movement speed of the vehicle machine calculated from the real-time position. Then, it is judged whether the movement speed of the vehicle machine exceeds the first preset speed. If it does, the user can be considered to be driving at high speed; if it does not, the user can be considered not to be driving at high speed.

Optionally, when a phone is connected to the vehicle machine, the vehicle machine can also obtain the phone's data and derive the phone's movement speed through the sensors inside the phone (such as the acceleration sensor), thereby determining whether the user is driving at high speed.

It should be noted that, in one implementation of step S404, the vehicle machine first judges whether the vehicle the user is driving is in a preset scene. If it is, step S405 can be executed directly: when the vehicle is in a preset scene, the user must concentrate on driving and cannot be overly distracted by operating the vehicle machine. If it is not in a preset scene, the vehicle machine further judges whether the current speed exceeds the first preset speed, that is, whether the vehicle is in a high-speed driving state. If the current speed exceeds the first preset speed, step S405 is executed: even if the vehicle is not in a preset scene, when it is in a high-speed driving state the user must concentrate highly on driving. If the current speed is less than or equal to the first preset speed, step S403 is executed: only when the vehicle is neither in a preset scene nor in a high-speed driving state can the vehicle machine display the user interface with all functions for the user to use.

In another implementation of step S404, the vehicle machine first judges whether the current speed exceeds the first preset speed. If the current speed exceeds the first preset speed, that is, the vehicle is currently in a high-speed driving state, step S405 is executed directly: when the vehicle is in a high-speed driving state, the user must concentrate highly on driving. If the current speed is less than or equal to the first preset speed, the vehicle machine further judges whether the vehicle is in a preset scene. As described above, if it is in a preset scene, step S405 is executed; if it is not, that is, the vehicle is neither in a preset scene nor currently in a high-speed driving state, step S403 is executed.
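The first preset condition of S404 reduces to a simple disjunction; either ordering of the two checks yields the same result. A minimal sketch, with the 60 km/h default taken from the example above and all names assumed:

```python
def meets_first_condition(in_preset_scene, speed_kmh,
                          first_preset_speed_kmh=60.0):
    """First preset condition: the vehicle is in a preset scene, OR the
    current speed exceeds the first preset speed. Python's short-circuit
    `or` evaluates the scene check first, mirroring the first
    implementation described in the text; swapping the operands gives
    the second implementation with the same outcome."""
    return in_preset_scene or speed_kmh > first_preset_speed_kmh
```

Note that the comparison is strict: at exactly the first preset speed, with no preset scene, the full second user interface is still shown.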
S405: The touch screen of the vehicle machine displays the first user interface.

Specifically, the first user interface does not display the first function; or, the first function is displayed in the first user interface in the first preset manner, the first preset manner indicating that the first function cannot be operated by the user. In addition, the second function is displayed in the first user interface in the second preset manner, the second preset manner indicating that the user is allowed to operate the second function. The first function is a function whose operation occupies more of the user's visual and cognitive resources; the second function is a function other than the first function, occupying fewer of the user's visual or cognitive resources. Furthermore, the functions in the embodiments of the present application refer to certain functions or certain operations of certain functions: a function may be presented in the form of an APP installed on the electronic device, and an operation may be an operation option provided by an APP installed on the electronic device.

A function that occupies more of the user's visual and cognitive resources refers to an operation that can be performed only after the user thinks further and engages vision. For example: when dialing, the user must first recall the number to be dialed, and then must use visual resources to tap the corresponding digits in order on the touch screen and ensure each tap is correct before the dialing operation succeeds; this occupies a great deal of the user's visual and cognitive resources. As another example: when using the navigation function, the user may search for a place, which requires visual resources to enter the place on the touch screen and to ensure the input is correct; sometimes the navigation function recommends several routes, and the user must think further to confirm the final route, inevitably occupying more of the user's visual and cognitive resources. Because occupying visual and cognitive resources diverts much of the user's attention, the user cannot concentrate on driving the vehicle. Therefore, when the vehicle is in a preset scene or the speed is high, the user must concentrate on driving to ensure driving safety, and the vehicle machine needs to hide these first functions that occupy more of the user's visual and cognitive resources, or make the first functions inoperable by the user, to ensure driving safety.

The above first function thus includes any one or more of the following: the entertainment function, the messaging function, the dialing function of the call function, the contact-search function of the call function, the address-search function of the navigation function, the song-search function of the music function, and the weather-query function of the weather function. The second function includes any one or more of the following: the recent-contacts function of the call function, the recommended-address function of the navigation function, the recent-playlist function of the music function, and the local real-time weather function of the weather function.

It should be noted that the above first function and second function are merely exemplary illustrations; the first function and second function may also include other functions. Furthermore, the above division into first functions and second functions may be preconfigured by the manufacturer based on experience, or determined by the vehicle machine based on the user's usage; the embodiments of the present application do not specifically limit this.

In a possible implementation, as shown in FIG. 8, two first user interfaces are given as examples. Of the six commonly used functions correspondingly shown on the display interface 602 after the user selects the "home" function on the main menu 601, the WeChat function and the entertainment function may occupy more of the user's visual or cognitive resources when used; for instance, replying to a WeChat message requires focusing vision on the touch screen for reading or typing, making it impossible to pay attention to the driving situation ahead, which increases the rate at which danger occurs in preset scenes or at high speed. The WeChat function and the entertainment function are therefore designated first functions, and the map function, contacts function, phone function, and weather function second functions. Thus, as shown in (a) of FIG. 8, the first user interface does not display the first functions and displays only the second functions; that is, the first functions are hidden, so that the user cannot operate the first functions but can operate the second functions. Alternatively, as shown in (b) of FIG. 8, the first functions are crossed out on the home screen so that they cannot be operated by the user, while the second functions the user is allowed to operate are displayed normally. Of course, besides crossing out, the icons can also be greyed out (the function cannot be used when tapped) to make the first functions inoperable by the user; the embodiments of the present application do not specifically limit this.

In a possible implementation, as shown in FIG. 9, yet another first user interface is given as an example, in which the first function is part of the operations of the call function. On the touch screen of the vehicle machine, some operations of certain functions occupy more of the user's visual or cognitive resources, yet such functions are commonly used or essential to the user and cannot be completely disabled (hidden or made inoperable) by the above method. Therefore, the operations within such functions that occupy more of the user's visual or cognitive resources can be disabled, so that the user concentrates on driving. For example, the call function is an indispensable function: to avoid missing important contact messages, the vehicle machine cannot directly and completely disable this function, but to ensure driving safety, some of its operations can be disabled. Referring to FIG. 9, compared with the call interface shown in FIG. 7, the call interface shown in FIG. 9 keeps only the call-record interface and does not display the dialing and contact-search functions; the user can then call back a recently called contact with just one tap, achieving telephone contact without occupying high visual or cognitive resources. As another example, the navigation function is essential for the user driving the vehicle, and its use cannot be entirely prohibited either. The vehicle machine therefore provides only address recommendations on the navigation interface; the user obtains an optimal driving route by tapping the corresponding address, thereby realizing the navigation function, while searching for addresses via the touch screen is prohibited, so as to ensure the user's driving safety.
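The hide-versus-disable choice of FIG. 8 can be sketched as a small UI-spec builder. The concrete function sets below mirror the example in the text (WeChat and entertainment as first functions); the function name, the string states, and the sorting are illustrative assumptions.

```python
# Hypothetical split of the home-screen functions, following the FIG. 8
# example: first functions are visually/cognitively demanding, second
# functions are lightweight.
FIRST_FUNCTIONS = {"wechat", "entertainment"}
SECOND_FUNCTIONS = {"map", "contacts", "phone", "weather"}

def first_user_interface(hide=True):
    """Return (icon, state) pairs for the first user interface.

    hide=True  -> first functions are omitted entirely (FIG. 8(a));
    hide=False -> they are shown but marked disabled, e.g. crossed out
                  or greyed (FIG. 8(b)).
    """
    ui = [(f, "enabled") for f in sorted(SECOND_FUNCTIONS)]
    if not hide:
        ui += [(f, "disabled") for f in sorted(FIRST_FUNCTIONS)]
    return ui
```

Either variant leaves the second functions operable; the only difference is whether the disabled first functions remain visible as a reminder that they exist.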
It can thus be seen that, with the method for displaying a user interface provided by the present application, by setting the preset scenes and the first preset speed, the vehicle machine can judge, while the vehicle is travelling, whether the current vehicle is in a preset scene or the current speed exceeds the first preset speed; if it determines that the vehicle is in a preset scene or the current speed exceeds the first preset speed, the vehicle machine decides to display the first user interface. That is, when the user must concentrate highly on driving the vehicle, the vehicle machine can ensure the user's driving safety by not displaying, or not allowing the user to operate, the functions and/or operations that occupy more of the user's visual or cognitive resources. This improves on the prior art, in which the user interface displays all functions at all times, so that the user must still be distracted by operating functions with high visual and cognitive demands even when needing to concentrate highly on driving, making driving unsafe.
In a possible implementation, on the first user interface the vehicle machine can provide the user with an operation entry, so that the user can operate the first function in other ways that occupy fewer of the user's visual or cognitive resources. For example, a first preset button can be set to provide the user with the above operation entry, and the first preset button can be combined with the voice function of the vehicle machine or the phone to guide the user through the relevant operations, so as to improve the user experience. Exemplarily, see FIG. 10, in which a first preset button is added on the basis of the user interface shown in (b) of FIG. 8; see FIG. 11, in which a first preset button is added on the basis of the user interface shown in FIG. 9. The first preset button provides the user with an entry to these high-resource (occupying more of the user's visual and cognitive resources) functions or operations, so that the user can use the disabled functions or operations. The user can operate the first preset button by tapping, dragging, or other kinds of actions; when the screen detects the user's relevant operation on the first preset button, it indicates that the user wants to use these high-resource functions or operations, and the vehicle machine then executes step S406, guiding the user through operating these functions in the form of voice broadcast.

It should be noted that the first preset button can be placed on the first user interface by the vehicle machine, and can also be set to be displayed on the second user interface, so that even when the touch screen of the vehicle machine displays all functions, the user can still be offered other ways of operating the touch screen that occupy fewer of the user's visual or cognitive resources. Furthermore, for greater convenience, the first preset button can be configured as a floating button. The embodiments of the present application do not specifically limit the user interface on which the first preset button is displayed or its display form.

Optionally, instead of setting a first preset button, the first function displayed in the first preset manner can itself be combined with the voice function of the vehicle machine or the phone to guide the user through the relevant operations; that is, the voice control mode can be entered when an operation on the icon of the first function is detected on the first user interface. Since the first user interface may be the user interface shown in (a) of FIG. 8 that does not display the first function, or the user interface shown in (b) of FIG. 8 that displays the first function in the first preset manner and the second function in the second preset manner, the user cannot directly locate the first function, or must pick the needed first function out of all functions, so selecting the first function would claim some of the user's attention. Therefore, the touch screen of the vehicle machine can detect the user's first operation on the first user interface; when the user's first operation is detected, as shown in FIG. 12, the touch screen displays a third user interface, in which all the first functions are displayed in the first preset manner, and the current third user interface displays only the first functions so that the user can quickly locate the needed function. The first operation is any one of the following operations by the user on the touch screen of the vehicle machine: a tap operation, a slide operation, a preset press operation, or a preset gesture operation. The tap operation may be a single tap, double tap, or multiple consecutive taps by the user in a non-icon display area of the first user interface. The slide operation is a vertical or diagonal slide, or a multi-finger slide, distinguished from the existing screen-switching slides of vehicle machine touch screens. The preset press operation is a press gesture set according to the user's usage habits, for example a single-finger or multi-finger press. The preset gesture operation is a gesture set according to the user's usage habits, for example "drawing a circle". The embodiments of the present application do not specifically limit the first operation.

By adding to the touch screen of the vehicle machine the detection of the user's second operation on the first preset button or the first function icon, the vehicle machine can meet the user's need to use certain disallowed functions or operations in the current vehicle state, improving the user experience on the premise of safe driving.
S406: Enter the voice control mode.

Specifically, when the touch screen of the vehicle machine detects the user's second operation, the vehicle machine starts the voice interaction function. The second operation is an operation on the first preset button or on the icon of the first function, and is used to start the first function; the voice control mode is used for voice interaction between the user and the vehicle machine. Using the voice interaction function, the vehicle machine prompts the user that the first function can be operated through a third operation and tells the user how to perform the third operation; the third operation is an operation in the form of voice interaction.

For example, after the user operates the first preset button or the icon of the first function on the first user interface shown in FIG. 10, the voice control mode is entered and the user interface shown in FIG. 13 or FIG. 15 is displayed. As another example, after the user operates a first function icon on the third user interface shown in FIG. 12, the voice control mode is entered and the user interface shown in FIG. 13 or FIG. 15 is displayed. As yet another example, after the user operates the first preset button on the user interface shown in FIG. 11, the voice control mode is entered and the user interface shown in FIG. 14 is displayed.

Optionally, when the first function is a certain function, see FIG. 13, the first function is operated through the voice control mode. After the user selects the "home" function on the menu bar 601 and operates the first preset button on the user interface correspondingly shown on the display interface 602 to enter the voice control mode, the vehicle machine broadcasts by voice "the current operation is risky" to remind the user that this kind of function increases driving risk. The vehicle machine then continues the voice broadcast "what can I help you with" to ask what operation the user wants to perform at this moment. The user can answer with the desired operation; for instance, as shown in FIG. 13, a user who wants to view WeChat (message) messages can answer "view the latest WeChat message". Since the vehicle machine can convert text into speech and broadcast it without requiring the user's manual involvement, this operation can be realized; it is thus judged that the WeChat function the user currently wants to use can be achieved by voice control, that is, the user may use the WeChat function. The vehicle machine can therefore broadcast "WeChat opened", and then play the latest WeChat message the user asked to view: "message from Mom: 'when will you be home'". At this point, the user can control the vehicle machine directly by voice to carry out the desired operation, for example replying to the message with the vehicle machine by the direct voice command "reply 'half an hour'". Finally, the vehicle machine receives the command and broadcasts the reply content, "replied 'half an hour'", so that the user can confirm the final result. If it is incorrect, the user must control the vehicle machine by voice to re-enter the relevant content, for example commanding "the current message is wrong", "re-reply 'half an hour'", so that the vehicle machine performs the correct operation. If it is correct, the user need not operate further. Alternatively, the user can also enter the voice control mode directly by operating the icon of the first function on the user interface shown in FIG. 10 or FIG. 12. Through the above operations, the user can operate the first function in other ways that occupy fewer of the user's visual or cognitive resources, improving the user experience while ensuring the user's driving safety.

Optionally, when the first function is certain operations of certain functions, see FIG. 14, the first function is operated through the voice control mode. Suppose that on the call interface shown in FIG. 11 the user finds that the desired contact, for example "Mom", is not in the recent call records, and dialing is unavailable, so no call can be made. The user can then operate the first preset button in FIG. 11 to bring up the voice-controllable user interface shown in FIG. 14. At this point, the vehicle machine broadcasts by voice "what can I help you with". The user answers with the operation currently desired, for example "call Mom". The vehicle machine judges whether the operation the user currently wants can be completed by voice; the operation of dialing a call is one the vehicle machine can complete directly without the user's manual control. The vehicle machine can therefore perform the dialing operation while broadcasting "calling Mom's home phone" to inform the user that the requested operation is being executed, meeting the user's needs while ensuring driving safety.

Optionally, when the user performs the second operation to start the voice control mode and operate the first function, the vehicle machine evaluates the first function to confirm whether it can be operated through the voice control mode. The WeChat function, for example, can be operated using conversion between text and speech. By contrast, certain entertainment functions demand high visual and cognitive resources from the user and cannot be operated by voice; the user is then warned that such functions increase driving risk and cannot be used.

Exemplarily, see FIG. 15 for the prompt given when a first function cannot be operated through the voice control mode. Some functions cannot achieve their purpose through simple voice manipulation and require further thought from the user before the final function can be realized; this part of the functions should therefore be completely prohibited. For example, after the user operates the first preset button or a first function icon on the user interface shown in FIG. 10, the vehicle machine broadcasts by voice "the current operation is risky" to remind the user that operating the first function at this moment increases driving risk. The vehicle machine then continues the broadcast "what can I help you with". The user can answer with the desired operation; for the entertainment function shown in FIG. 10, for example, the user can give the voice command "open the entertainment function". At this point, the vehicle machine must judge whether the function the user has commanded to start can be completed by voice control; since the entertainment function requires the user's manual participation and cannot rely purely on voice operation, the vehicle machine determines that the function cannot be used in the current state, and broadcasts "the entertainment function is too risky in the current scene and cannot run" to stop the user from using the function and ensure the user's driving safety.
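The voice-operability evaluation described above can be sketched as a small capability lookup. The capability table, function names, and reply strings are all illustrative assumptions, not the patented wording; a real head unit would presumably derive operability from each app's declared voice capabilities.

```python
# Hypothetical capability table: which first functions can be completed
# purely through speech (text<->speech conversion) and which cannot.
VOICE_OPERABLE = {
    "wechat": True,         # messages can be read out and dictated
    "dial": True,           # the head unit can place the call itself
    "entertainment": False, # needs manual, visually demanding input
}

def handle_voice_request(function):
    """Sketch of the head unit's decision when the user asks for a first
    function in voice-control mode (reply strings are illustrative)."""
    if VOICE_OPERABLE.get(function, False):
        return "opening {} via voice control".format(function)
    return "{} is too risky in the current scene and cannot run".format(function)
```

Unknown functions default to "not voice-operable", which is the safe choice in this context.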
It should be noted that, after the user completes the current operation in voice control mode, the vehicle machine can display the corresponding user interface according to the scene the vehicle is currently in. Alternatively, after the user completes the current operation in voice control mode, the vehicle machine need not immediately switch the user interface according to the current scene; it can keep displaying the current user interface for a preset period, so that the user can continue to operate the current function in voice control mode without having to operate the first operation button again to re-enter the current function. For example, see FIG. 13, FIG. 14, and FIG. 15, where the current user interfaces all contain a "microphone" icon; within the preset period, the user can tap this icon to continue using the current function for related operations. For example, in the example of FIG. 13, when the user confirms that the message the vehicle machine replied with is correct but then wants to continue replying to Mom, the user can tap the "microphone" icon and command the vehicle machine again to reply with the corresponding content. Or, the user can tap the "microphone" icon and command the vehicle machine "view the next new message" or "reply to colleague A 'I'm not working tomorrow'", and so on. Adding this capability to the user interface of the voice control mode can reduce the user's repeated operations, improve efficiency, and further help the user concentrate on driving the vehicle.

It can be understood that the "microphone" icon is merely an exemplary representation; other icons or other forms can also be used to give the vehicle machine the above capability, which the embodiments of the present application do not specifically limit. Of course, the above icon for continuing to use the voice control mode need not be set at all: the vehicle machine can be configured directly so that, in voice control mode, after the user's current voice operation is finished, it remains in voice control mode for a preset period, so that the user can operate the functions on the touch screen directly by voice. The embodiments of the present application do not specifically limit this.
Optionally, in step S404 above, when the vehicle machine determines that the vehicle is in a preset scene, see FIG. 5, step S405 need not be executed immediately; instead, step S407 is executed first to judge whether the current speed exceeds a second preset speed, where the second preset speed is the minimum speed limit corresponding to the preset scene. That is, by executing step S407, the user can still use all functions on the touch screen of the vehicle machine when the user's vehicle is in a preset scene but travelling slowly, providing the user with a better experience.

S407: The vehicle machine determines whether the current speed exceeds the second preset speed. If the vehicle machine determines that the current speed exceeds the second preset speed, step S405 is executed. If the vehicle machine determines that the current speed is less than or equal to the second preset speed, step S403 is executed.

Specifically, the second preset speed is the minimum of the speed limits corresponding to the preset scenes; a different speed limit (second preset speed) can be set for each different preset scene, and the set of all second preset speeds constitutes the second preset speed set. When road conditions are complicated, the user may be in several preset scenes at the same time. For example, if the vehicle is driving through a high-risk section at night, the vehicle machine determines that the vehicle is in two preset scenes, "night driving" and "high-risk road section"; it must then obtain from the second preset speed set the two second preset speeds corresponding to the current two scenes, and take the smaller of the two as the basis for the vehicle machine's decision on which user interface to display. When the user is driving in one preset scene, the second preset speed corresponding to the current scene can be obtained from the second preset speed set and used as the basis for the decision. When the current speed of the user's vehicle is higher than the second preset speed corresponding to any scene the vehicle is in, the speed is considered high and a high degree of concentration on driving is required, that is, step S407 yields "yes". Of course, the speed limits corresponding to the preset scenes can also be unified into a single second preset speed; when the user's current speed exceeds this second preset speed, the result is likewise "yes". The above second preset speed set and second preset speed are preset by the manufacturer according to vehicle performance or preconfigured by the vehicle machine according to the user's operating habits; the embodiments of the present application do not specifically limit this.

Exemplarily, different speed limits (second preset speeds) are set for the different preset scenes, forming the second preset speed set. For example, when driving on a crowded road section, the second preset speed is set to 30 km/h; when driving at night, 40 km/h; when driving on a high-risk road section, 15 km/h; when driving in severe weather, 20 km/h; when driving on a road section with speed limit A, 50%*A, for example 30 km/h when the current limit is 60 km/h. This gives the second preset speed set {30, 40, 15, 20, 50%*A} km/h. The vehicle machine then determines the second preset speeds respectively corresponding to the preset scenes the user is currently in, and obtains the minimum of all the second preset speeds corresponding to the scenes the vehicle is currently in. If the current speed exceeds this minimum second preset speed, the vehicle is currently in a scene requiring a high degree of concentration, and the user must focus attention on driving the vehicle; therefore, step S405 is executed, that is, the vehicle machine displays the first user interface in which the first function is hidden or cannot be operated by the user.
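The minimum-of-active-scenes rule can be sketched as follows. The numeric table reproduces the example values above; the scene keys and function names are illustrative assumptions.

```python
def second_preset_speed(scene, road_limit_kmh=None):
    """Second preset speed (km/h) for a single preset scene, using the
    example table from the text; for a speed-limited section the value
    is 50% of the posted limit A."""
    table = {"crowded": 30.0, "night": 40.0,
             "high_risk": 15.0, "bad_weather": 20.0}
    if scene == "speed_limited":
        return 0.5 * road_limit_kmh
    return table[scene]

def effective_limit(active_scenes, road_limit_kmh=None):
    """When several preset scenes apply at once, the smallest of their
    second preset speeds governs which user interface is shown."""
    return min(second_preset_speed(s, road_limit_kmh)
               for s in active_scenes)
```

So, for the night-plus-high-risk example in the text, the governing value is 15 km/h; driving faster than that in those scenes triggers the restricted first user interface.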
Exemplarily, because some of the above preset scenes are difficult to distinguish by speed limit, or the second preset speeds corresponding to different preset scenes are close to one another, a unified second preset speed may be set for all preset scenes, for example 20 km/h. The terminal then determines, from the current vehicle speed, whether the speed exceeds this second preset speed, and hence whether the user currently needs to concentrate fully, so that the corresponding user interface is displayed.
In another possible implementation, even within the same preset scene, different conditions may call for different second preset speeds. Therefore, the second preset speed may be set in combination with the differing conditions of individual users or the differing performance of individual vehicles.
Optionally, a user's driving proficiency affects how the user operates the terminal. Therefore, second preset speeds for the corresponding scenes may be set according to factors that affect driving proficiency, such as the number of years the user has been driving. For example, for a novice driver, a speed above 55 km/h already constitutes high-speed driving, and too much attention must not be diverted to operating the terminal; for an experienced driver, that threshold may instead be 65 km/h. That is, the user's driving proficiency may be determined from factors such as years of driving experience, and different second preset speeds may then be set for different scenes to form the second preset speed set. Finally, the user may select among different second preset speed sets, or the manufacturer may perform an initial configuration that is later updated at annual inspection. Of course, the terminal may also configure a corresponding mode for the user according to the user's operations, where the different configuration modes are the different second preset speed sets configured for different proficiency levels. For example, users may be classified into three levels according to driving proficiency: level A, level B, and level C. Level-A users are considered the most proficient, so their second preset speeds for the various preset scenes may be set relatively high. Level-B users are of average proficiency, so their second preset speeds may be set to a medium level; the level-B speed set may also serve as the default second preset speed set, so that vehicles for which the user's proficiency cannot be determined are all configured with the level-B set. Level-C users use the second preset speed set intended for novice drivers.
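The A/B/C proficiency scheme above, with level B as the fallback, can be sketched as a table lookup. All speed values below are illustrative placeholders, not figures from the patent, and the helper name is our own.

```python
# Hypothetical second-preset-speed sets per proficiency level, in km/h.
SPEED_SETS_BY_LEVEL = {
    "A": {"crowded_road": 35, "night": 45, "high_risk_road": 20},  # most proficient
    "B": {"crowded_road": 30, "night": 40, "high_risk_road": 15},  # average / default
    "C": {"crowded_road": 25, "night": 35, "high_risk_road": 10},  # novice
}

def speed_set_for_driver(level) -> dict:
    """Return the driver's second preset speed set.

    Falls back to the level-B set when the proficiency level is unknown
    or cannot be determined, as described in the text.
    """
    if level in SPEED_SETS_BY_LEVEL:
        return SPEED_SETS_BY_LEVEL[level]
    return SPEED_SETS_BY_LEVEL["B"]
```

A terminal could call `speed_set_for_driver(None)` when no driver profile exists and still obtain the default (level-B) thresholds.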
Optionally, the user's health condition or age also affects driving safety to some extent. Therefore, second preset speeds for the corresponding scenes may be set according to factors, such as the user's age, that affect the user's driving condition. For example, a user with a heart condition may be less able to handle emergencies, so that user's second preset speed set should be set lower to ensure driving safety. Likewise, an older user may react more slowly, which also calls for a lower second preset speed set. Of course, a camera inside the vehicle or on a mobile phone may also capture the user's facial expression, and image analysis may then be used to determine whether the user's current state is abnormal; if the user is in poor condition, the terminal may adopt a lower second preset speed set, so that the user can cope with the situations arising in the preset scenes and drive safely.
Optionally, vehicles differ in their ability to handle emergencies depending on their performance. Therefore, different second preset speed sets for the preset scenes may be configured according to vehicle performance. For example, a vehicle with poorer performance may also have weaker braking, and thus may be unable to brake in time in an emergency to keep the user safe. Such a vehicle also needs a lower second preset speed set, so that the user can touch-operate all functions of the terminal's touchscreen only at lower speeds, ensuring that enough attention remains to handle emergencies.
Of course, beyond the cases listed above, other conditions of the user and the vehicle may also influence the setting of the second preset speeds for the various preset scenes; this is not specifically limited in the embodiments of this application.
In a possible implementation, a first time threshold may also be set. For example, when the displayed interface is the first user interface, the user's scene at the next moment may already satisfy the conditions for the touchscreen to display the interface containing all functions; even so, the interface need not be switched immediately. Instead, the switch is performed only after the current state has persisted longer than the first time threshold. Road conditions are complex, and an unexpected situation may temporarily reduce the vehicle speed; at such a moment the user needs to devote more attention to observing and analyzing the road, and must not spend excessive visual and cognitive resources on interface operations. Setting the first time threshold therefore further safeguards the user's driving safety.
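The first-time-threshold behavior above amounts to a hysteresis rule: restrict immediately, but relax only after the permissive condition has held continuously long enough. A minimal sketch, with a hypothetical class name, threshold value, and interface labels:

```python
FIRST_TIME_THRESHOLD = 5.0  # seconds; illustrative value, not from the patent

class UiSwitcher:
    """Switch back to the full UI only after the condition persists."""

    def __init__(self):
        self.showing_restricted = True
        self._unrestricted_since = None  # when the permissive condition began

    def update(self, unrestricted: bool, now: float) -> str:
        """Feed the current condition and time; return which UI to display."""
        if not unrestricted:
            self._unrestricted_since = None   # condition broken: reset timer
            self.showing_restricted = True    # restricting is immediate
        elif self._unrestricted_since is None:
            self._unrestricted_since = now    # condition just became true
        elif now - self._unrestricted_since >= FIRST_TIME_THRESHOLD:
            self.showing_restricted = False   # held long enough: switch
        return "first_ui" if self.showing_restricted else "second_ui"
```

A brief dip below the speed threshold resets the timer, so a momentary slowdown never flashes the full-function interface at the driver.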
In a possible implementation, when the user has not yet finished an operation on the touchscreen displaying the second user interface and the scene changes to one requiring the terminal to display the first user interface, the user's current operation must be interrupted immediately and the interface switched, to ensure driving safety. Alternatively, the interface switch may be performed after the user completes the current operation, accompanied by a voice broadcast warning the user that the current scene presents a driving risk, thereby meeting the user's needs while ensuring driving safety.
It can thus be seen that the method for displaying a user interface provided in this application sets a second preset speed on top of the preset scenes and the first preset speed, so that the interface to be displayed can be determined from both the vehicle's scene and its real-time speed. When the user needs to concentrate fully, the terminal displays the first user interface, ensuring driving safety. Moreover, through the provision for detecting the user's second operation, the user can operate the first function from the first user interface in other ways that do not occupy the user's visual or cognitive resources, improving the user experience. This improves on the prior art, in which the user interface displays all functions at all times, so that a user who needs to concentrate on driving may still be distracted into operating functions that demand substantial visual and cognitive resources, making driving unsafe.
An embodiment of this application further provides a vehicle-mounted terminal, which may include a starting unit, a collection unit, a processing unit, a display unit, and the like. These units may perform the steps of the foregoing embodiments to implement the method for displaying a user interface. For example, the starting unit supports the terminal in performing the start-up of the head unit in S401 of FIG. 4; the collection unit supports the terminal in collecting, in S402, S404, and S406 of FIG. 4, the vehicle speed, environmental information about the vehicle's scene, the user's voice information, and the like; the processing unit supports the terminal in performing S402, S404, and S406 of FIG. 4 and S407 of FIG. 5; and the display unit supports the terminal in displaying the user interfaces in S403 and S405 of FIG. 4, and/or in other processes of the solutions described herein.
An embodiment of this application further provides a vehicle-mounted terminal, including one or more processors; a memory; and a touchscreen for detecting touch operations and displaying interfaces. The memory stores instructions that, when executed by the one or more processors, cause the terminal to perform the steps of the foregoing embodiments, thereby implementing the method for displaying a user interface of the foregoing embodiments.
Exemplarily, when the vehicle-mounted terminal is the device shown in FIG. 3, its processor may be the processor 210 shown in FIG. 3, its memory may be the memory 220 shown in FIG. 3, and its touchscreen may be the combination of the display 260 and the touch sensor 290E shown in FIG. 3.
An embodiment of this application further provides a chip system. As shown in FIG. 16, the chip system includes at least one processor 1601 and at least one interface circuit 1602, which may be interconnected by lines. For example, the interface circuit 1602 may be configured to receive signals from another apparatus (for example, the memory of the electronic device 200); for another example, the interface circuit 1602 may be configured to send signals to another apparatus (for example, the processor 1601). Exemplarily, the interface circuit 1602 may read instructions stored in the memory and send the instructions to the processor 1601. When the instructions are executed by the processor 1601, the electronic device may be caused to perform the steps performed by the electronic device 200 (for example, the head unit) in the foregoing embodiments. Of course, the chip system may further include other discrete components, which is not specifically limited in the embodiments of this application.
An embodiment of this application further provides a computer storage medium storing computer instructions that, when run on a vehicle-mounted terminal, cause the terminal to perform the above related method steps to implement the method for displaying a user interface of the foregoing embodiments.
An embodiment of this application further provides a computer program product that, when run on a computer, causes the computer to perform the above related steps to implement the method for displaying a user interface of the foregoing embodiments.
In addition, an embodiment of this application further provides an apparatus, which may specifically be a component or a module and may include a processor and a memory connected to each other. The memory is configured to store computer-executable instructions; when the apparatus runs, the processor may execute the computer-executable instructions stored in the memory, so that the apparatus performs the method for displaying a user interface of the foregoing method embodiments.
The vehicle-mounted terminal, chip, computer storage medium, computer program product, and chip system provided in the embodiments of this application are all configured to perform the corresponding methods provided above; for the beneficial effects they can achieve, reference may be made to those of the corresponding methods, which are not repeated here.
From the description of the above implementations, a person skilled in the art will clearly understand that, for convenience and brevity of description, the division into the functional modules above is merely an example. In practical applications, the functions may be allocated to different functional modules as needed; that is, the internal structure of the apparatus may be divided into different functional modules to complete all or some of the functions described above. For the specific working processes of the system, apparatus, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed methods may be implemented in other ways. For example, the vehicle-mounted terminal embodiments described above are merely illustrative. The division into modules or units is only a logical functional division; other divisions are possible in actual implementation. For example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. Furthermore, the mutual couplings, direct couplings, or communication connections shown or discussed may be implemented through certain interfaces; indirect couplings or communication connections between modules or units may be electrical, mechanical, or of other forms.
Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of this application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of this application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform all or some of the steps of the methods described in the embodiments of this application. The aforementioned storage medium includes flash memory, a removable hard disk, read-only memory, random access memory, a magnetic disk, an optical disc, and other media capable of storing program instructions.
The above are merely specific implementations of this application, but the protection scope of this application is not limited thereto. Any variation or replacement within the technical scope disclosed in this application shall be covered by the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.
Claims (15)
- A method for displaying a user interface, applied to a vehicle-mounted terminal, characterized in that the method comprises: if a vehicle is in a driving state, determining, by the vehicle-mounted terminal, whether the vehicle satisfies a first preset condition, the first preset condition comprising that the vehicle is in a preset scene or that a current vehicle speed is greater than a first preset speed; and if it is determined that the vehicle satisfies the first preset condition, displaying, by the vehicle-mounted terminal, a first user interface, wherein the first user interface does not display a first function, or the first user interface displays the first function in a first preset manner, the first preset manner being used to indicate that the first function cannot be operated by a user.
- The method for displaying a user interface according to claim 1, characterized in that the first user interface displays a second function in a second preset manner, the second preset manner being used to indicate that the user is allowed to operate the second function.
- The method for displaying a user interface according to claim 1, characterized in that, after the vehicle-mounted terminal displays the first user interface, the method further comprises: if the vehicle-mounted terminal detects a first operation of the user on the first user interface, displaying, by the vehicle-mounted terminal, a third user interface, wherein the third user interface displays only the first function in the first preset manner; and the first operation is any one of the following operations by the user on a touchscreen of the vehicle-mounted terminal: a tap operation, a slide operation, a preset press operation, or a preset gesture operation.
- The method for displaying a user interface according to any one of claims 1 to 3, characterized in that the preset scene comprises any one or more of the following: the vehicle traveling on a crowded road section, the vehicle traveling on a high-risk road section, the vehicle traveling at night, the vehicle traveling in severe weather conditions, or the vehicle traveling on a speed-limited road section.
- The method for displaying a user interface according to any one of claims 1 to 4, characterized in that the method further comprises: if the vehicle-mounted terminal detects a second operation of the user, starting, by the vehicle-mounted terminal, a voice control mode; wherein the second operation is operating a first preset button or an icon of the first function and is used to start the first function, and the voice control mode is used for voice interaction between the user and the vehicle-mounted terminal.
- The method for displaying a user interface according to claim 5, characterized in that starting the voice control mode by the vehicle-mounted terminal comprises: starting, by the vehicle-mounted terminal, a voice interaction function; and prompting the user, by the vehicle-mounted terminal through the voice interaction function, that the first function can be operated by means of a third operation, and informing the user of the operation method of the third operation, wherein the third operation is an operation in the form of voice interaction.
- The method for displaying a user interface according to any one of claims 1 to 6, characterized in that the first preset condition further comprises: the vehicle being in the preset scene and the current vehicle speed being greater than a second preset speed, wherein the second preset speed is a minimum speed limit corresponding to the preset scene.
- The method for displaying a user interface according to any one of claims 1 to 7, characterized in that the method further comprises: if it is determined that the vehicle does not satisfy the first preset condition, displaying, by the vehicle-mounted terminal, a second user interface, wherein the second user interface displays the first function and a second function in a second preset manner, the second preset manner being used to indicate that the user is allowed to operate the first function and the second function.
- The method for displaying a user interface according to any one of claims 1 to 8, characterized in that determining whether the vehicle is in a preset scene comprises: determining, by the vehicle-mounted terminal, whether the vehicle is traveling on a crowded road section according to at least one of the following data: a driving state of the vehicle, images of the vehicle's surroundings, or a driving route of the vehicle; and/or determining, by the vehicle-mounted terminal, whether the vehicle is traveling on a high-risk road section according to at least one of the following data: the driving route of the vehicle, the driving state of the vehicle, or images of the vehicle's surroundings; and/or determining, by the vehicle-mounted terminal, whether the vehicle is traveling at night according to at least one of the following data: ambient light conditions of the vehicle, images of the vehicle's surroundings, or real-time time; and/or determining, by the vehicle-mounted terminal, whether the vehicle is traveling in severe weather conditions according to at least one of the following data: ambient light conditions of the vehicle, images of the surroundings, or data from weather software; and/or determining, by the vehicle-mounted terminal, whether the vehicle is traveling on a speed-limited road section according to at least one of the following data: images of the vehicle's surroundings or current road information.
- The method for displaying a user interface according to any one of claims 1 to 9, characterized in that the first function comprises any one or more of the following: an entertainment function, a messaging function, a dialing function of a call function, a contact-search function of a call function, an address-search function of a navigation function, a song-search function of a music function, or a weather-query function of a weather function.
- The method for displaying a user interface according to any one of claims 2 to 10, characterized in that the second function comprises any one or more of the following: a recent-contacts function of a call function, a recommended-address function of a navigation function, a recent-playlist function of a music function, or a local real-time weather function of a weather function.
- A vehicle-mounted terminal, characterized by comprising: one or more processors; a memory storing instructions; and a touchscreen for detecting touch operations and displaying a user interface; wherein, when the instructions are executed by the one or more processors, the vehicle-mounted terminal is caused to perform the method for displaying a user interface according to any one of claims 1 to 11.
- A computer storage medium, characterized by comprising computer instructions that, when run on a vehicle-mounted terminal, cause the vehicle-mounted terminal to perform the method for displaying a user interface according to any one of claims 1 to 11.
- A computer program product, characterized in that, when the computer program product runs on a computer, the computer is caused to perform the method for displaying a user interface according to any one of claims 1 to 11.
- A chip system, characterized by comprising at least one processor and at least one interface circuit, the at least one interface circuit being configured to perform transceiving functions and to send instructions to the at least one processor; when the at least one processor executes the instructions, the at least one processor performs the method for displaying a user interface according to any one of claims 1 to 11.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910808405.0A CN110716776A (zh) | 2019-08-29 | 2019-08-29 | Method for displaying a user interface and vehicle-mounted terminal |
CN201910808405.0 | 2019-08-29 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021037251A1 true WO2021037251A1 (zh) | 2021-03-04 |
Family
ID=69209502
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/112285 WO2021037251A1 (zh) | 2020-08-28 | Method for displaying a user interface and vehicle-mounted terminal |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN110716776A (zh) |
WO (1) | WO2021037251A1 (zh) |
Families Citing this family (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110716776A (zh) * | 2019-08-29 | 2020-01-21 | Huawei Device Co., Ltd. | Method for displaying a user interface and vehicle-mounted terminal |
CN111369709A (zh) | 2020-04-03 | 2020-07-03 | CITIC Dicastal Co., Ltd. | Driving scene determination method, apparatus, computer, storage medium and system |
CN111746401B (zh) * | 2020-06-29 | 2022-03-11 | Guangzhou Chengxing Zhidong Automotive Technology Co., Ltd. | Interaction method based on three-dimensional parking, and vehicle |
CN111899545B (zh) * | 2020-07-29 | 2021-11-16 | TCL Communication (Ningbo) Co., Ltd. | Driving reminder method, apparatus, storage medium and mobile terminal |
CN112078474B (zh) * | 2020-09-11 | 2022-02-15 | Guangzhou Xiaopeng Motors Technology Co., Ltd. | Vehicle control method and apparatus |
CN112078520B (zh) * | 2020-09-11 | 2022-07-08 | Guangzhou Xiaopeng Motors Technology Co., Ltd. | Vehicle control method and apparatus |
CN112230764B (zh) * | 2020-09-27 | 2022-03-11 | Air Force Medical Center of the PLA | Tactile perception carrier function switching method, apparatus and electronic device |
CN112389198B (zh) * | 2020-11-17 | 2022-12-13 | Guangzhou Xiaopeng Motors Technology Co., Ltd. | Display control method, apparatus, vehicle and storage medium |
CN112590807B (zh) * | 2020-12-11 | 2022-05-10 | Guangzhou Chengxing Zhidong Automotive Technology Co., Ltd. | Vehicle-control card interaction method and apparatus for vehicle components |
WO2022134106A1 (zh) * | 2020-12-25 | 2022-06-30 | Huawei Technologies Co., Ltd. | Central control screen display method and related device |
CN114446083A (zh) * | 2021-12-31 | 2022-05-06 | Zhuhai Huafa New Technology Investment Holding Co., Ltd. | Smart community service system |
CN114368288B (zh) * | 2022-01-05 | 2023-09-12 | FAW Jiefang Automotive Co., Ltd. | Display control method and apparatus for vehicle-mounted terminal, computer device and storage medium |
CN114461330A (zh) * | 2022-02-11 | 2022-05-10 | Tencent Technology (Shenzhen) Co., Ltd. | Display control method for vehicle-mounted terminal and related apparatus |
CN116841484A (zh) * | 2022-03-25 | 2023-10-03 | Huawei Technologies Co., Ltd. | Adaptive configuration method for vehicle-mounted applications and vehicle-mounted terminal |
CN117009927A (zh) * | 2022-04-27 | 2023-11-07 | Huawei Technologies Co., Ltd. | Application display method and apparatus, and vehicle-mounted terminal |
CN115016361A (zh) * | 2022-07-01 | 2022-09-06 | China FAW Co., Ltd. | Vehicle-mounted unmanned aerial vehicle control method and apparatus, electronic device and medium |
CN117508203A (zh) * | 2022-07-30 | 2024-02-06 | Huawei Technologies Co., Ltd. | Application control method for vehicle central control device and related apparatus |
CN115914006A (zh) * | 2022-11-01 | 2023-04-04 | Great Wall Motor Co., Ltd. | Method and apparatus for processing vehicle network information, vehicle and electronic apparatus |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2009059151A (ja) * | 2007-08-31 | 2009-03-19 | Panasonic Corp | Personalization of menu screens |
CN104750379A (zh) * | 2013-12-30 | 2015-07-01 | Shanghai PATEO Yuezhen Network Technology Service Co., Ltd. | User interface display method and apparatus for vehicle-mounted system |
CN105321515A (zh) * | 2014-06-17 | 2016-02-10 | ZTE Corporation | Vehicle-mounted application control method and apparatus for mobile terminal, and terminal |
CN106445296A (zh) * | 2016-09-27 | 2017-02-22 | Chery Automobile Co., Ltd. | Method and apparatus for displaying vehicle-mounted application icons |
CN108600519A (zh) * | 2018-03-31 | 2018-09-28 | Guangdong OPPO Mobile Telecommunications Corp., Ltd. | Incoming call control method and related product |
US20190080691A1 (en) * | 2017-09-12 | 2019-03-14 | Toyota Motor Engineering & Manufacturing North America, Inc. | System and method for language selection |
CN110716776A (zh) * | 2019-08-29 | 2020-01-21 | Huawei Device Co., Ltd. | Method for displaying a user interface and vehicle-mounted terminal |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110214162A1 (en) * | 2010-02-26 | 2011-09-01 | Nokia Corporation | Method and appartus for providing cooperative enablement of user input options |
US9374423B2 (en) * | 2012-10-16 | 2016-06-21 | Excelfore Corporation | System and method for monitoring apps in a vehicle or in a smartphone to reduce driver distraction |
US9854432B2 (en) * | 2014-09-18 | 2017-12-26 | Ford Global Technologies, Llc | Method and apparatus for selective mobile application lockout |
US10501093B2 (en) * | 2016-05-17 | 2019-12-10 | Google Llc | Application execution while operating vehicle |
2019-08-29: CN application CN201910808405.0A filed; publication CN110716776A (zh), status Pending
2020-08-28: PCT application PCT/CN2020/112285 filed; publication WO2021037251A1 (zh), active Application Filing
Also Published As
Publication number | Publication date |
---|---|
CN110716776A (zh) | 2020-01-21 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 20857803; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 20857803; Country of ref document: EP; Kind code of ref document: A1 |
Ref document number: 20857803 Country of ref document: EP Kind code of ref document: A1 |