CN115534822A - Method, device and mobile carrier for controlling display - Google Patents

Method, device and mobile carrier for controlling display

Info

Publication number
CN115534822A
Authority
CN
China
Prior art keywords
window
vehicle
image information
display
controlling
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210961076.5A
Other languages
Chinese (zh)
Inventor
王剑 (Wang Jian)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202210961076.5A
Publication of CN115534822A
Legal status: Pending

Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60R - VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R11/00 - Arrangements for holding or mounting articles, not otherwise provided for
    • B60R11/02 - Arrangements for holding or mounting articles, not otherwise provided for for radio sets, television sets, telephones, or the like; Arrangement of controls thereof
    • B60R11/0229 - Arrangements for holding or mounting articles, not otherwise provided for for radio sets, television sets, telephones, or the like; Arrangement of controls thereof for displays, e.g. cathodic tubes
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60R - VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R11/00 - Arrangements for holding or mounting articles, not otherwise provided for
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60R - VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R11/00 - Arrangements for holding or mounting articles, not otherwise provided for
    • B60R11/04 - Mounting of cameras operative during drive; Arrangement of controls thereof relative to the vehicle
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16 - Sound input; Sound output
    • G06F3/167 - Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 - Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41 - Structure of client; Structure of client peripherals
    • H04N21/414 - Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance
    • H04N21/41422 - Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance located in transportation means, e.g. personal vehicle
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 - Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41 - Structure of client; Structure of client peripherals
    • H04N21/422 - Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/42203 - Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS] sound input device, e.g. microphone
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 - Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41 - Structure of client; Structure of client peripherals
    • H04N21/422 - Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/42204 - User interfaces specially adapted for controlling a client device through a remote control device; Remote control devices therefor
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 - Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 - Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/441 - Acquiring end-user identification, e.g. using personal code sent by the remote control or by inserting a card
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 - Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 - Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/441 - Acquiring end-user identification, e.g. using personal code sent by the remote control or by inserting a card
    • H04N21/4415 - Acquiring end-user identification, e.g. using personal code sent by the remote control or by inserting a card using biometric characteristics of the user, e.g. by voice recognition or fingerprint scanning
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 - Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 - Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442 - Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44213 - Monitoring of end-user related data
    • H04N21/44218 - Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 - Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 - End-user applications
    • H04N21/488 - Data services, e.g. news ticker
    • H04N21/4882 - Data services, e.g. news ticker for displaying messages, e.g. warnings, reminders
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60R - VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R11/00 - Arrangements for holding or mounting articles, not otherwise provided for
    • B60R2011/0001 - Arrangements for holding or mounting articles, not otherwise provided for characterised by position
    • B60R2011/0003 - Arrangements for holding or mounting articles, not otherwise provided for characterised by position inside the vehicle
    • B60R2011/0026 - Windows, e.g. windscreen

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Mechanical Engineering (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Social Psychology (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Biomedical Technology (AREA)
  • Databases & Information Systems (AREA)
  • Fittings On The Vehicle Exterior For Carrying Loads, And Devices For Holding Or Mounting Articles (AREA)

Abstract

The application provides a method, an apparatus, and a mobile carrier for controlling display. The method includes: generating first image information according to a received first instruction and identity information of a first user in a cabin of a vehicle; and controlling a first window of the vehicle to display the first image information. Embodiments of the application can be applied to new energy vehicles or intelligent vehicles and help improve the comfort and safety of drivers and passengers.

Description

Method and device for controlling display and mobile carrier
Technical Field
This application relates to the field of intelligent cabins, and more particularly to a method and apparatus for controlling display, and a mobile carrier.
Background
In more and more scenarios, drivers and passengers in a vehicle need to show a two-dimensional code (such as a payment code or a health code) to the outside, and the window often has to stay open for a long time while the code is being shown. When the weather outside is bad, or against an epidemic background, keeping the window open for a long time reduces the comfort of the occupants and can even compromise their safety.
Disclosure of Invention
Embodiments of the application provide a method and apparatus for controlling display, and a mobile carrier, which can display information to the outside using a vehicle window as the medium, realizing contactless information verification and helping to improve the comfort and safety of drivers and passengers.
The mobile carrier in this application may include an on-road vehicle, a watercraft, an aircraft, industrial equipment, agricultural equipment, or entertainment equipment, among others. For example, the mobile carrier may be a vehicle in the broad sense, such as a road vehicle (e.g., a commercial vehicle, a passenger vehicle, a motorcycle, an aircraft, a train), an industrial vehicle (e.g., a forklift, a trailer, a tractor), an engineering vehicle (e.g., an excavator, a bulldozer, a crane), agricultural equipment (e.g., a mower, a harvester), an amusement device, or a toy vehicle; the embodiments of this application do not specifically limit the type of vehicle. As another example, the mobile carrier may be an airplane, a ship, or another means of transportation.
In a first aspect, a method of controlling a display is provided, the method comprising: generating first image information according to the received first instruction and identity information of a first user in a cabin of the vehicle; and controlling a first window of the vehicle to display the first image information.
The first image information is intended for verification by a machine or person outside the vehicle.
With this technical solution, the vehicle window can serve as a medium for displaying information to the outside, realizing contactless information verification and improving occupant comfort and safety. In one scenario, when an epidemic is relatively severe, maintaining a negative-pressure environment in the cabin, or minimizing the exchange between cabin air and outside air, can significantly reduce the risk of transmission; with the above solution, contactless verification of the first image information is achieved without opening the window, effectively protecting the health of the occupants in the cabin and/or of the personnel outside it. In another scenario, the environment outside the vehicle is harsh: for example, there is heavy wind-blown sand, the temperature is too high or too low, or there is rain or snow. If the window is kept open for a long time, sand may enter the cabin, air at an uncomfortable temperature may flow in, or rain and snow may blow in; any of these reduces occupant comfort. With the above solution, contactless verification of the first image information without opening the window effectively preserves the comfort of the occupants in the cabin.
With reference to the first aspect, in certain implementations of the first aspect, the first instruction includes at least one of: a voice instruction, an instruction generated from an input on a vehicle-mounted screen of the vehicle, an instruction generated from an operation of a physical function key of the vehicle, and an instruction generated from an image recognition result. Illustratively, the physical function key may be a lever (stalk), a button, or another physical control.
With reference to the first aspect, in certain implementations of the first aspect, the first image information includes a two-dimensional code image.
With reference to the first aspect, in certain implementations of the first aspect, the controlling a first window of the vehicle to display the first image information includes: and controlling the first window to display the first image information according to a first position of the first user in the cabin, wherein the first position corresponds to the first window.
In this technical solution, the display position of the first image information can be adapted to the position of the first user in the cabin, so that when there are multiple users in the vehicle, people or devices outside the vehicle can clearly tell which user the image information on the window corresponds to.
With reference to the first aspect, in certain implementations of the first aspect, the controlling a first window of the vehicle to display the first image information includes: and controlling the first window to display the first image information according to a second position where the equipment for detecting or identifying the first image information is located, wherein the second position corresponds to the first window.
For example, the device for detecting or identifying the first image information may be a handheld device of a person outside the vehicle, or a device that automatically detects or identifies the first image information, such as a two-dimensional code reader mounted on the arm bracket of a barrier gate.
For example, if the device is on the left side of the vehicle, the first image information is displayed through at least one of the left-side windows; if the device is on the right side of the vehicle, through at least one of the right-side windows; and if the device is in front of the vehicle, through the front windshield.
In some possible implementations, the first window may also be controlled to display the first image information according to a location where a person detecting or identifying the first image information is located, where the location corresponds to the first window.
In some possible implementations, the first window is controlled to display the first image information according to both a first position of the first user in the cabin and a second position where a device that detects or identifies the first image information is located.
in the technical scheme, the first image information is displayed through the vehicle window on one side of the device or the person for detecting or identifying the first image information, so that the efficiency of checking the first image information is improved, and the traffic efficiency is further improved.
With reference to the first aspect, in certain implementations of the first aspect, the controlling a first window of the vehicle to display the first image information includes: and controlling a first area of the first window to display the first image information, wherein the first area corresponds to a third position of the first user in the cockpit.
With reference to the first aspect, in certain implementations of the first aspect, the controlling a first window of the vehicle to display the first image information includes: controlling a head-up display (HUD) to project the first image information onto the first window.
With reference to the first aspect, in certain implementations of the first aspect, the method further includes: and prompting the position of the first vehicle window and/or the content of the first image information.
In some possible implementations, the content of the first image information may be indicated by displaying text on the first window. For example, when the first image information includes a payment code, the text "please scan the payment code here" may indicate the content of the first image information.
In some possible implementations, the position of the first window and/or the content of the first image information may be announced by voice broadcast. For example, if the first image information includes a payment code and the left front window (the first window) displays it, a voice prompt such as "please scan the payment code on the left front window" may indicate both the window position and the content.
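As an illustration only, the following Python sketch assembles such a prompt string; the function compose_prompt and the label tables are hypothetical and not part of the patent.

    # Hypothetical sketch: assembling the text/voice prompt described above.
    WINDOW_LABELS = {
        "front_left": "left front window",
        "front_right": "right front window",
        "rear_left": "left rear window",
        "rear_right": "right rear window",
    }
    CONTENT_LABELS = {
        "payment_code": "payment code",
        "health_code": "health code",
    }

    def compose_prompt(window: str, content: str) -> str:
        # e.g. "Please scan the payment code on the left front window"
        return (f"Please scan the {CONTENT_LABELS[content]} "
                f"on the {WINDOW_LABELS[window]}")

    print(compose_prompt("front_left", "payment_code"))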
In the above technical solution, when the first image information is to be verified or identified, the person or device outside the vehicle may not know what the occupant is doing or where the two-dimensional code is displayed; prompting the person outside the vehicle by text or voice helps them scan the code, improving communication between the inside and outside of the vehicle and the efficiency of verifying the first image information.
With reference to the first aspect, in certain implementations of the first aspect, the first window includes at least one of: the front windshield, the rear windshield, the driver's window, the front passenger's window, the left rear-row window of the cabin, and the right rear-row window of the cabin.
For example, in a 5-seat vehicle the "rear row" is the second row; in a 7-seat vehicle the "rear row" may be the second row and/or the third row. For other multi-seat vehicles, the "rear rows" are the rows other than the row containing the driver's seat and the front passenger seat.
With reference to the first aspect, in certain implementations of the first aspect, the method further includes: and responding to a first input of the first user, and controlling the first window to be switched to a second window of the vehicle to display the first image information.
Illustratively, the first input may be a voice instruction, such as "display the first image information on the left front window". Alternatively, the first input may come from a physical function key: for example, if the physical function key is a lever, the relationship between the direction in which the lever is pushed and the window may be preset, e.g., pushing the lever up displays the image information on the driver's window; pushing it down, on the front passenger's window; pushing it left, on the second-row left window; and pushing it right, on the second-row right window. Thus, when the first input is a physical-key input, the window that displays the image information is determined by the direction in which the lever is pushed. Illustratively, the image information includes the first image information.
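A minimal Python sketch of such a preset lever-to-window relationship follows; the direction names and the mapping itself are illustrative assumptions.

    # Hypothetical preset relationship between lever direction and window.
    LEVER_TO_WINDOW = {
        "up": "driver's window",
        "down": "front passenger's window",
        "left": "second-row left window",
        "right": "second-row right window",
    }

    def window_for_lever(direction: str) -> str:
        # The window on which the image information is displayed.
        return LEVER_TO_WINDOW[direction]

    print(window_for_lever("left"))  # -> second-row left window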
This technical solution gives the occupants a way to control which window displays the image information, improving the interactive experience of using the cabin.
In a second aspect, an apparatus for controlling display is provided, the apparatus comprising: a generating unit configured to generate first image information according to a received first instruction and identity information of a first user in a cabin of a vehicle; and a processing unit configured to control a first window of the vehicle to display the first image information.
With reference to the second aspect, in certain implementations of the second aspect, the processing unit is configured to: and controlling the first window to display the first image information according to a first position of the first user in the cabin, wherein the first position corresponds to the first window.
With reference to the second aspect, in certain implementations of the second aspect, the processing unit is configured to: and controlling the first window to display the first image information according to a second position where the equipment for detecting or identifying the first image information is located, wherein the second position corresponds to the first window.
With reference to the second aspect, in certain implementations of the second aspect, the processing unit is configured to: and controlling a first area of the first window to display the first image information, wherein the first area corresponds to a third position of the first user in the cockpit.
With reference to the second aspect, in certain implementations of the second aspect, the processing unit is configured to: control a head-up display (HUD) to project the first image information onto the first window.
With reference to the second aspect, in some implementations of the second aspect, the apparatus further includes a prompting unit configured to prompt a location of the first window and/or content of the first image information.
With reference to the second aspect, in certain implementations of the second aspect, the processing unit is configured to: and responding to a first input of the first user, and controlling the first window to be switched to a second window of the vehicle to display the first image information.
With reference to the second aspect, in certain implementations of the second aspect, the first window includes at least one of: the front windshield, the rear windshield, the driver's window, the front passenger's window, the left rear-row window of the cabin, and the right rear-row window of the cabin.
With reference to the second aspect, in some implementations of the second aspect, the first image information includes a two-dimensional code image.
With reference to the second aspect, in certain implementations of the second aspect, the first instructions include at least one of: a voice instruction, an instruction generated from an input to an on-vehicle screen of the vehicle, an instruction generated from an operation of a physical function key of the vehicle, and an instruction generated from an image recognition result.
In a third aspect, there is provided an apparatus for controlling a display, the apparatus comprising: a memory for storing a program; a processor configured to execute the program stored in the memory, and when the program stored in the memory is executed, the processor is configured to perform the method in any one of the possible implementations of the first aspect.
In a fourth aspect, a mobile carrier is provided, which includes the apparatus in any one of the possible implementations of the second or third aspect.
With reference to the fourth aspect, in certain implementations of the fourth aspect, the mobile carrier is a vehicle.
In a fifth aspect, there is provided a computer program product, the computer program product comprising: computer program code for causing a computer to perform the method of any of the possible implementations of the first aspect described above, when said computer program code runs on a computer.
It should be noted that all or part of the computer program code may be stored on a first storage medium, and the first storage medium may be packaged together with the processor or packaged separately from the processor; this is not specifically limited in the embodiments of this application.
In a sixth aspect, a computer-readable medium is provided, which stores instructions that, when executed by a processor, cause the processor to implement the method of any one of the possible implementations of the first aspect.
In a seventh aspect, a chip is provided, where the chip includes a processor, and is configured to call a computer program or a computer instruction stored in a memory, so that the processor executes the method in any one of the possible implementation manners of the first aspect.
With reference to the seventh aspect, in one possible implementation manner, the processor is coupled with the memory through an interface.
With reference to the seventh aspect, in a possible implementation manner, the chip further includes a memory, where the memory stores a computer program or computer instructions.
Drawings
FIG. 1 is a functional block diagram of a vehicle provided in an embodiment of the application;
FIG. 2 is a schematic view of a vehicle cabin scene provided by an embodiment of the present application;
FIG. 3 is a schematic diagram of a system architecture for controlling display according to an embodiment of the present application;
FIG. 4 is a schematic flowchart of a method for controlling display provided by an embodiment of the present application;
FIG. 5 is a schematic diagram of an application scenario of a method for controlling display according to an embodiment of the present application;
FIG. 6 is another schematic diagram of an application scenario of a method for controlling display according to an embodiment of the present application;
FIG. 7 is yet another schematic diagram of an application scenario of a method for controlling display according to an embodiment of the present application;
FIG. 8 is a schematic block diagram of an apparatus for controlling display provided by an embodiment of the present application;
FIG. 9 is another schematic block diagram of an apparatus for controlling display according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the accompanying drawings.
Fig. 1 is a functional block diagram of a vehicle 100 provided in an embodiment of the present application. Vehicle 100 may include a perception system 120, a display device 130, and a computing platform 150, where perception system 120 may include one or more sensors that sense information about the environment surrounding vehicle 100. For example, the sensing system 120 may include a positioning system (which may be a global positioning system (GPS), a compass system, or another positioning system) and an inertial measurement unit (IMU). As another example, the sensing system 120 may further include one or more of a lidar, a millimeter-wave radar, an ultrasonic radar, a pressure sensor, a sound sensor, a vision sensor, and a camera. In some possible implementations, the vehicle 100 may also include a sound-emitting device, such as a speaker, for outputting audio to a user of the vehicle 100. In some possible implementations, the vehicle 100 may also perform voice interaction with the user through other external devices, such as a Bluetooth headset; this is not specifically limited in the embodiments of this application.
The display devices 130 in the cabin fall mainly into two categories: the first is the vehicle-mounted display screen; the second is the projection display, for example a head-up display (HUD), by means of which content can be projected onto a vehicle window for display. The vehicle-mounted display screen is a physical screen and an important part of the in-vehicle infotainment system; several screens may be installed in the cabin, such as a digital instrument screen, a central control screen, a screen in front of the front passenger (also called the copilot), and screens in front of the left and right rear passengers. A head-up display, also called a head-up display system, is used, for example, to project driving information such as speed and navigation onto a display medium (e.g., the windshield) in front of the driver. This shortens the time the driver's gaze is diverted, avoids the pupil changes that gaze diversion causes, and improves driving safety and comfort. HUDs include, for example, the combiner head-up display (C-HUD), windshield head-up display (W-HUD), and augmented-reality head-up display (AR-HUD) systems.
Some or all of the functions of the vehicle 100 may be controlled by the computing platform 150. The computing platform 150 may include one or more processors, such as processors 151 to 15n (n a positive integer); a processor is a circuit with signal-processing capability. In one implementation, a processor may be a circuit with instruction-reading and instruction-execution capability, such as a central processing unit (CPU), a microprocessor, a graphics processing unit (GPU) (which can be understood as a kind of microprocessor), or a digital signal processor (DSP). In another implementation, a processor may implement functions through the logical relationships of hardware circuits, which may be fixed or reconfigurable, for example a processor implemented as an application-specific integrated circuit (ASIC) or a programmable logic device (PLD) such as a field-programmable gate array (FPGA). For a reconfigurable hardware circuit, the process by which the processor loads a configuration file to configure the hardware circuit can be understood as the processor loading instructions to implement the functions of some or all of the above units. Furthermore, the processor may be a hardware circuit designed for artificial intelligence, which can be understood as an ASIC, such as a neural network processing unit (NPU), a tensor processing unit (TPU), or a deep learning processing unit (DPU). In addition, the computing platform 150 may further include a memory for storing instructions; some or all of the processors 151 to 15n may call the instructions from the memory and execute them to implement the corresponding functions.
In the embodiments of this application, the processor may process the vehicle's surrounding-environment information acquired from the sensing system 120, or may process the occupants' voice information acquired from the sensing system 120, and decide to acquire the two-dimensional code information and generate the two-dimensional code. Further, the processor can also control the two-dimensional code to be displayed on a vehicle window. In some possible implementations, the two-dimensional code information and the two-dimensional code may also be stored as data in a memory in the computing platform 150.
It should be understood that the above operations may be executed by the same processor, or may be executed by a plurality of processors, which is not specifically limited by the present application.
It should be understood that the above-mentioned components are only an example, and in the specific implementation process, components in the above-mentioned modules may be added or deleted according to actual needs, and fig. 1 should not be construed as limiting the embodiments of the present application.
Fig. 2 is a schematic view of a vehicle cabin scene provided in an embodiment of the present application. One or more cameras may be installed inside or outside the smart cabin to capture images of the cabin interior or exterior, such as the camera of a driver monitoring system (DMS), the camera of a cabin monitoring system (CMS), or a dashcam; Fig. 2 takes a camera installed on the A-pillar as an example. The cameras used to capture the interior and the exterior may be the same camera or different cameras. In addition, a vehicle-mounted display screen is arranged in the cabin; Fig. 2 takes a screen arranged in the central control area as an example, through which related information can be displayed. It should be understood that the embodiments of this application do not specifically limit the position of the camera that collects image information in the cockpit: the camera may be located on the A-pillar shown in Fig. 2, below the steering wheel, on the B-pillar, near the rear-view mirror, or elsewhere.
As described above, in more and more scenarios an occupant in a vehicle needs to show a two-dimensional code (for example, a payment code or a health code) to the outside and often has to keep a window open for a long time while doing so. When the weather outside is bad, or against an epidemic background, keeping the window open for a long time reduces occupant comfort and can even compromise safety. In particular, when the cabin needs to maintain a negative-pressure environment, a window cannot be opened to show information such as a two-dimensional code. In view of this, the embodiments of this application provide a method and apparatus for controlling display, and a mobile carrier, which can display two-dimensional code information using a vehicle window as the medium, realizing contactless two-dimensional code verification and helping to improve occupant comfort and safety.
Fig. 3 is a schematic diagram of a system framework for controlling display according to an embodiment of the present application. As shown in Fig. 3, the system includes a sensing module, a two-dimensional code generation module, an optical projection module, a display module, a payment code generator, a health code generator, and an account application device. The sensing module sends an acquired instruction or acquired information to the two-dimensional code generation module to trigger it to generate a two-dimensional code. Illustratively, the two-dimensional code generation module obtains two-dimensional code information from at least one of the payment code generator, the health code generator, and the account application device according to the instruction or information received from the sensing module, generates the two-dimensional code, and sends it to the optical projection module, which projects it onto the display module, such as the front windshield, the rear windshield, the driver's window (also called the left front window), the front passenger's window (also called the right front window), the second-row left window (also called the left rear window), or the second-row right window (also called the right rear window). In one example, the sensing module may include one or more sensors of the sensing system 120 shown in Fig. 1: it may include a sound sensor, in which case the acquired instruction may be a voice instruction in the cabin; or a touch sensor, in which case the instruction may be a tap detected on the vehicle-mounted display screen or human-machine interface (HMI) shown in Fig. 2; or a physical function key, in which case the instruction may be a signal detected at that key, such as a steering-wheel lever signal. In some possible implementations, the sensing module may include one or more image-capturing devices of the sensing system 120 shown in Fig. 1 and send a captured image to the two-dimensional code generation module, which performs text recognition on the picture and obtains an instruction such as "please show the two-dimensional code". In yet another example, the two-dimensional code generation module and the optical projection module may each include one or more processors of the computing platform 150 shown in Fig. 1, and the display module may include one or more display devices of the display apparatus 130 shown in Fig. 1.
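A minimal Python sketch of how these modules might be wired together is given below; all class, method, and data names are assumptions for illustration, not an implementation disclosed by the patent.

    class SensingModule:
        """Collects a trigger: a voice command, an HMI tap, a lever signal,
        or a text string recognized from a camera image (assumed shapes)."""
        def get_instruction(self) -> dict:
            return {"kind": "health_code", "source": "voice"}

    class QRCodeGenerationModule:
        """Fetches code information from a generator and renders a QR image."""
        def __init__(self, generators: dict):
            self.generators = generators

        def generate(self, instruction: dict, identity: str) -> str:
            info = self.generators[instruction["kind"]](identity)
            return f"<QR image encoding {info}>"  # stand-in for real rendering

    class OpticalProjectionModule:
        """Projects the rendered image onto a display surface (a window)."""
        def project(self, image: str, window: str) -> None:
            print(f"projecting {image} onto the {window}")

    # Wire the modules together as in Fig. 3.
    generators = {
        "health_code": lambda uid: f"health code of {uid}",
        "payment_code": lambda uid: f"payment code of {uid}",
    }
    sensing = SensingModule()
    qr_module = QRCodeGenerationModule(generators)
    projector = OpticalProjectionModule()

    instruction = sensing.get_instruction()
    image = qr_module.generate(instruction, identity="first user")
    projector.project(image, window="left front window")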
It should be understood that the above modules and devices are only one example, and in practical applications, the above modules and devices may be added or deleted according to actual needs.
Fig. 4 shows a schematic flowchart of a method 400 for controlling display according to an embodiment of the present application. The method 400 may be applied to the vehicle 100 shown in Fig. 1 or to the cabin shown in Fig. 2, and may also be executed by the system shown in Fig. 3. The method includes the following steps:
s410, first image information is generated according to the received first instruction and the identity information of the first user in the cabin of the vehicle.
Illustratively, the vehicle may be the vehicle 100 in the above embodiments. The first user may be the user in the driver's seat or a user elsewhere in the cabin. The identity information includes, but is not limited to, biometric information stored in the vehicle, an account, and the like. Biometric information includes, but is not limited to, fingerprints, palm prints, voice information, face information, iris information, and gait information; the account may include account information for logging in to the in-vehicle system. The first image information may include a two-dimensional code associated with the first user (e.g., a payment code or a health code); or other images or information that need to be shown to machines or people outside the vehicle, such as a relevant transit permit; or an interface image containing a two-dimensional code, for example the health-code interface that pops up on the first user's mobile terminal after the first user scans a venue code with it.
In some possible implementations, the vehicle stores identity information of one or more users; if the identity information of the first user matches the stored identity information, the first image information may be generated according to the received first instruction and the identity information of the first user, as sketched below.
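Assuming a simple key-value store of identities, the following sketch checks for a match before generating the image; the data shapes and helper names are invented for illustration.

    # Hypothetical identity gate: generate the first image information only
    # if the first user matches identity information stored in the vehicle.
    STORED_IDENTITIES = {"user-1": "face-template-abc123"}  # invented data

    def generate_first_image(instruction: dict, user_id: str) -> str:
        # Stand-in for fetching code data and rendering the image.
        return f"<first image for {instruction['kind']} of {user_id}>"

    def maybe_generate(instruction: dict, user_id: str, face_template: str):
        stored = STORED_IDENTITIES.get(user_id)
        if stored is None or stored != face_template:
            return None  # no match: do not generate the image
        return generate_first_image(instruction, user_id)

    print(maybe_generate({"kind": "health_code"}, "user-1", "face-template-abc123"))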
Illustratively, the first instruction may include, but is not limited to: a voice instruction, an instruction generated from an input on the vehicle-mounted screen, an instruction generated from an operation of a physical function key (such as a lever or a button), and an instruction generated from an image recognition result.
First, a voice instruction of the user: for example, the vehicle may capture the audio of the voice instruction through a sound sensor and parse out the semantics "please display the first image information" through a processing device.
Second, an instruction generated from an input on the vehicle-mounted screen: for example, while the vehicle-mounted screen shows the main desktop page, the touch sensor detects a right-swipe signal on the screen, and an instruction for controlling the vehicle to generate the first image information is generated from that signal.
Third, an instruction generated from an operation of a physical function key: for example, when the pressure sensor at the vehicle's lever picks up a press signal, a processing device (e.g., a processor) of the vehicle may process the signal and, from it, generate the instruction for controlling the vehicle to generate the first image information.
Fourth, an instruction generated from an image recognition result: for example, a camera of the vehicle captures a picture of the vehicle's surroundings, the processing device recognizes the text "please display the first image information" in the image, and an instruction for controlling the vehicle to generate the first image information is generated from that text.
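The following sketch, with invented event shapes, shows how the four trigger sources above could be normalized into a single first instruction; none of these identifiers come from the patent.

    # Hypothetical normalization of the four trigger sources.
    def to_first_instruction(event: dict):
        if event["source"] == "voice" and "display" in event.get("text", ""):
            return {"action": "generate_first_image"}
        if event["source"] == "screen" and event.get("gesture") == "swipe_right":
            return {"action": "generate_first_image"}
        if event["source"] == "lever" and event.get("pressed"):
            return {"action": "generate_first_image"}
        if event["source"] == "camera_ocr" and "please display" in event.get("text", ""):
            return {"action": "generate_first_image"}
        return None  # not a recognized trigger

    print(to_first_instruction({"source": "screen", "gesture": "swipe_right"}))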
And S420, controlling a first window of the vehicle to display the first image information.
For example, the first window may include at least one of a front windshield, a left front window, a right front window, a left rear window, and a right rear window.
Optionally, the first window is controlled to display the first image information according to a first position of the first user in the cabin, the first position corresponding to the first window.
For example, "the first position corresponds to a first window" may include: if the first position is the driver's seat, the first window may be the driver's window and/or the front windshield of the above embodiments, i.e., the driver's seat corresponds to the driver's window; if the first position is the front passenger seat, the first window may be the front passenger's window and/or the front windshield, i.e., the front passenger seat corresponds to the front passenger's window; if the first position is the second-row left seat, the first window may be the second-row left window; and if the first position is the second-row right seat, the first window may be the second-row right window.
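The correspondence above can be written as a small lookup table; the key names and window labels are illustrative assumptions.

    # Seat-to-window correspondence from the paragraph above.
    SEAT_TO_WINDOWS = {
        "driver": ["driver's window", "front windshield"],
        "front_passenger": ["front passenger's window", "front windshield"],
        "second_row_left": ["second-row left window"],
        "second_row_right": ["second-row right window"],
    }

    def candidate_windows(first_position: str) -> list:
        return SEAT_TO_WINDOWS.get(first_position, [])

    print(candidate_windows("driver"))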
Illustratively, methods of determining the first position of the first user in the cockpit include, but are not limited to: determining it through an acoustic wave sensor; determining it through a camera device or an in-cabin vision sensor; and determining it through a pressure sensor arranged at a seat.
For example, the vehicle may determine the actual location of the first user within the cabin from audio information obtained by the acoustic wave sensor. The audio information may be what remains after excluding invalid audio (e.g., audio whose volume is too low) from the audio collected inside the vehicle. The sound source position is the position of the sound source corresponding to the audio information; based on sound source localization, it may be expressed as a position relative to the vehicle-mounted screen or as specific position coordinates.
For example, the sound source position may be determined from audio information collected by a plurality of acoustic wave sensors based on the time difference of arrival (TDOA) principle. Suppose acoustic sensors A and B both detect audio emitted from a sound source S, with the signal reaching sensor A at time t1 and sensor B at time t2, and let dt = t1 - t2. Denote the distance from S to A as AS, the distance from S to B as BS, and the speed of sound as c. Then dt = t1 - t2 = AS/c - BS/c, i.e., AS - BS = c * dt. Taking one of the sensors as a reference point and using the known distance a between the two acoustic sensors, the position of the sound source can be determined (in practice, additional sensors resolve the remaining ambiguity). The numeric sketch below checks this relation.
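The short Python check below verifies AS - BS = c * dt for made-up sensor and source positions; the coordinates are invented purely for illustration.

    import math

    c = 343.0  # speed of sound in air, m/s (room-temperature value)

    # Invented 2-D positions for sensors A, B and source S, in metres.
    A, B, S = (0.0, 0.0), (0.6, 0.0), (0.4, 0.3)

    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    t1 = dist(S, A) / c  # arrival time at sensor A
    t2 = dist(S, B) / c  # arrival time at sensor B
    dt = t1 - t2

    print(dist(S, A) - dist(S, B))  # path difference AS - BS ...
    print(c * dt)                   # ... equals c * dt, as derived above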
For example, determining the first position of the first user through a camera device or an in-cabin vision sensor may specifically be: acquiring the user's face information with the camera device or in-cabin vision sensor, and determining the user's actual position in the cabin from that face information. The camera device or in-cabin vision sensor includes, but is not limited to, camera sensors integrated in or mounted on the vehicle-mounted screen or mounted inside the cabin, for example red-green-blue (RGB) cameras, red-green-blue-infrared (RGB-IR) cameras, and time-of-flight (TOF) cameras.
In some possible implementations, the user's actual position in the cabin may also be determined by detecting whether a user is present in a seat, using a lidar integrated in or mounted on the vehicle-mounted screen or inside the cabin, a transceiver device on the screen or at its edge (including but not limited to millimeter-wave or centimeter-wave radar), an infrared sensing device on the screen or at its edge (including but not limited to an infrared or laser range finder), an eye tracker, and so on.
For example, determining the first position of the first user through a pressure sensor arranged at a seat may specifically be: when the pressure at a seat is greater than or equal to a preset threshold, it is confirmed that a user is seated there. Illustratively, the preset threshold may be 100 newtons (N), 200 N, or another value.
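A minimal occupancy check under this threshold might look as follows; the function name is an assumption, while the 100 N value is one of the example thresholds from the text.

    # Seat-occupancy check using the pressure threshold described above.
    PRESSURE_THRESHOLD_N = 100.0

    def seat_occupied(pressure_n: float) -> bool:
        return pressure_n >= PRESSURE_THRESHOLD_N

    print(seat_occupied(250.0))  # True: a user is seated here
    print(seat_occupied(30.0))   # False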
Optionally, the first window is controlled to display the first image information according to a second position where the device for detecting or identifying the first image information is located, where the second position corresponds to the first window.
For example, "the second position corresponds to a first window" may include: when the second position is in front of the vehicle, the first window may be the front windshield; when the second position is on the left side of the vehicle, the first window may be the driver's window and/or the second-row left window; when the second position is on the right side of the vehicle, the first window may be the front passenger's window and/or the second-row right window.
In some possible implementations, the first window is controlled to display the first image information based on both the first position of the first user in the cabin and the second position of the device that detects or identifies the first image information. In one example, the first position is the driver's seat, so it is preliminarily determined that the first image information is displayed on the front windshield and/or the left front window; further, if the second position is on the left side of the vehicle, it is determined that the first image information is displayed on the left front window, i.e., the first window is the left front window.
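The two-step selection in this example can be sketched as below; the candidate lists and side labels are illustrative assumptions, not from the patent.

    # Hypothetical two-step selection: candidates from the user's seat
    # (first position), then narrowed by the scanner side (second position).
    def choose_window(seat: str, scanner_side: str) -> str:
        candidates = {
            "driver": ["front windshield", "left front window"],
            "front_passenger": ["front windshield", "right front window"],
        }.get(seat, ["front windshield"])
        by_side = {
            "front": "front windshield",
            "left": "left front window",
            "right": "right front window",
        }
        preferred = by_side.get(scanner_side)
        # Keep the candidate that matches the scanner side, if any.
        return preferred if preferred in candidates else candidates[0]

    print(choose_window("driver", "left"))  # -> left front window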
Optionally, controlling the first window of the vehicle to display the first image information includes: controlling a first area of the first window to display the first image information, where the first area corresponds to a third position of the first user in the cockpit.
Illustratively, the first window includes the front windshield, which may include zone one, zone two, and zone three: zone one is the left area of the front windshield and corresponds to the driver's seat; zone two is the right area and corresponds to the front passenger seat; and zone three is the middle area and corresponds to the rear row.
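One illustrative way to encode these zones follows; the zone keys and lookup helper are invented names.

    # The three windshield zones and their seat correspondence, per the text.
    WINDSHIELD_ZONES = {
        "zone_one": {"area": "left", "seat": "driver"},
        "zone_two": {"area": "right", "seat": "front_passenger"},
        "zone_three": {"area": "middle", "seat": "rear_row"},
    }

    def zone_for_seat(seat: str):
        for zone, props in WINDSHIELD_ZONES.items():
            if props["seat"] == seat:
                return zone
        return None

    print(zone_for_seat("front_passenger"))  # -> zone_two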
It should be understood that the correspondence between the "position" and the "first window" or the "first region of the first window" is merely an exemplary illustration, and the correspondence therebetween may be in other forms in a specific implementation process.
Optionally, the controlling the first window of the vehicle to display the first image information includes: and controlling the first image information to be displayed on the first window through the HUD.
In some possible implementations, the first image information may be displayed on the first window through thin-film-transistor liquid-crystal display (TFT-LCD) technology, in which light emitted by light-emitting diodes (LEDs) passes through a liquid crystal cell before the information is projected; or through digital light processing (DLP) technology; or it may be projected on the first window by a micro-electro-mechanical system (MEMS) projection system, which uses high-power red, green, and blue (the three primary colors) lasers as the light source, the laser light being combined, passed through the corresponding optical elements and processing chip, and scanned onto the display surface.
Optionally, after the first window of the vehicle is controlled to display the first image information, the position of the first window and/or the content of the first image information is prompted.
Optionally, in response to a first input of the first user, the first window is controlled to be switched to a second window of the vehicle to display the first image information.
Illustratively, the first input may be a voice instruction, an input through a physical function key, or an input on the vehicle-mounted screen.
For example, the second window may include at least one window, different from the first window, among the front windshield, the rear windshield, the driver's window, the front passenger's window, the left rear-row window of the cabin, and the right rear-row window of the cabin.
According to the method for controlling display provided by the embodiments of this application, a vehicle window can serve as the medium for displaying information to the outside, realizing contactless information verification and helping to improve occupant comfort and safety.
Application examples of the method 400 in different scenarios are described in detail below with reference to Figs. 5 to 7.
In one example, when a vehicle is about to enter a residential compound or a parking lot, as shown in (a) of Fig. 5, the vehicle drives up to a barrier gate (boom barrier), and "please show the health code" is displayed at the gate's arm bracket, as shown at 501. The vehicle can capture image information outside the vehicle through its camera device, recognize the captured image and extract the message "please show the health code", and then obtain the first user's health code information according to the identity information of the first user in the vehicle. Alternatively, after the first user sees the content shown at 501, the user issues the voice instruction "please display the health code on the window", and upon receiving it the vehicle obtains the first user's health code information according to the first user's identity information.
In another example, when the vehicle is leaving a parking lot, as shown in (b) of Fig. 5, the vehicle drives up to the barrier gate, and the gate's arm bracket displays "you have parked for 6 hours and 22 minutes; the fee is 21 yuan; please show a payment code", as shown at 502. The vehicle can capture image information outside the vehicle through its camera device, recognize the captured image and extract the message "please show the payment code", and then obtain the first user's payment code information according to the identity information of the first user in the vehicle. Alternatively, after the first user sees the content shown at 502, the user issues the voice instruction "please display the payment code on the window", and upon receiving it the vehicle obtains the first user's payment code information according to the first user's identity information.
Illustratively, if the health code information or payment code information of the first user is stored in the vehicle, the vehicle can look it up and determine it according to the first user's identity information; or, if the information is stored on the first user's mobile terminal, the vehicle obtains it from the mobile terminal according to the first user's identity information.
For example, before the health code or payment code information of the first user is obtained according to the first user's identity information, the identity of the first user may be confirmed, e.g., recognized from a voice instruction or from a captured face image.
Further, the vehicle controls a first window to display the first image information generated based on the health code information or payment code information of the first user.
In one example, the vehicle controls the front windshield to display the first image information; the first image information may be displayed, for example, in an area of the front windshield corresponding to the driver's seat. The first image information may be displayed in response to an instruction related to "please show the payment code"; as shown in fig. 6 (a) from the in-cabin perspective, it may include the payment code 601, or the payment code 601 together with the prompt text "please scan the payment code here" 602. The prompt text indicates to persons outside the vehicle that the two-dimensional code displayed on the window is a payment code. Fig. 6 (b) shows the view from outside the vehicle, in which the two-dimensional code 603 and the text 604 are the display effects of the payment code 601 and the prompt text 602, respectively, as seen from outside the vehicle.
In yet another example, the first image information may be displayed in response to an instruction related to "please show the health code"; as shown in fig. 6 (c) from the in-cabin perspective, it may include the health code 605, or the health code 605 together with the name information 606 of the user to whom the health code belongs. Fig. 6 (d) shows the view from outside the vehicle, in which the two-dimensional code 607 and the text 608 are the display effects of the health code 605 and the name information 606, respectively, as seen from outside the vehicle.
In another example, the first window that displays the first image information may be determined according to the position of a notice board. As shown in fig. 6 (c), if the notice board requesting two-dimensional code information, or other information required for passage, is located on the left side of the vehicle, the front left window and/or the rear left window may be controlled to display the first image information.
In yet another example, the first window that displays the first image information may be determined according to a voice instruction of the first user. As shown in fig. 6 (e), the first user includes the driver; when a voice instruction "please display the health code on the front left window" issued by the driver is detected, the vehicle controls the front left window to display the first image information, as shown in fig. 6 (f). Alternatively, the first window may be determined according to the area from which the detected voice instruction "please display the two-dimensional code on the window" originates: if the voice instruction comes from the driver's area, the driver's seat window (i.e., the front left window in the above embodiment) is determined as the first window; if the voice instruction comes from the front passenger area, the front passenger seat window (i.e., the front right window in the above embodiment) is determined as the first window. Further, the first window is controlled to display the first image information.
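The zone-to-window selection just described can be sketched as a simple lookup; the zone names and window identifiers below are assumptions for illustration only.

ZONE_TO_WINDOW = {
    "driver": "front_left_window",
    "front_passenger": "front_right_window",
    "rear_left": "rear_left_window",
    "rear_right": "rear_right_window",
}

def pick_first_window(voice_zone, default="front_windshield"):
    """Select the first window from the zone in which the voice
    instruction was detected, falling back to the front windshield."""
    return ZONE_TO_WINDOW.get(voice_zone, default)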
In yet another example, when there are two or more users in the vehicle, the first window may be controlled to display information of the two or more users. For example, there are four users in the vehicle, seated at the driver's position, the front passenger position, the second-row right position, and the second-row left position of the cabin, respectively. In some possible implementations, as shown in fig. 6 (g), the front windshield is controlled to display the health codes of the four users in sequence. In some possible implementations, multiple windows may be controlled to display the information of the two or more users: as shown in fig. 6 (h), the health codes of the users seated at the driver's position, the front passenger position, the second-row right position, and the second-row left position are displayed on the front left window, the front right window, the rear right window, and the rear left window, respectively.
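The two multi-user strategies above (cycling the codes on the front windshield, or one window per occupied seat) might be dispatched as in the following sketch; the seat labels, the display object, and the dwell time are all illustrative assumptions.

SEAT_TO_WINDOW = {
    "driver": "front_left_window",
    "front_passenger": "front_right_window",
    "second_row_right": "rear_right_window",
    "second_row_left": "rear_left_window",
}

def display_multi_user(codes_by_seat, display, mode="sequential", dwell_s=5):
    """Show all occupants' codes, per fig. 6 (g) or fig. 6 (h)."""
    if mode == "sequential":  # fig. 6 (g): cycle codes on the front windshield
        for code in codes_by_seat.values():
            display.show("front_windshield", code)
            display.wait(dwell_s)  # hypothetical pause between codes
    else:  # fig. 6 (h): one window per occupied seat
        for seat, code in codes_by_seat.items():
            display.show(SEAT_TO_WINDOW[seat], code)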
In some possible implementations, when there are two or more users in the vehicle, after the two-dimensional code information of each of the two or more users is acquired according to their identity information, a composite two-dimensional code is synthesized from the two-dimensional code information of the two or more users, and a vehicle window is controlled to display the composite code.
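One way to synthesize a single code from several users' information is to merge the payloads and re-encode them. The sketch below assumes the third-party Python package "qrcode" and a JSON payload format, neither of which is specified by the embodiment.

import json

import qrcode  # third-party package, assumed available

def make_composite_code(codes_by_user):
    """Merge each user's two-dimensional code information into one
    payload and render it as a single composite code."""
    payload = json.dumps(codes_by_user, ensure_ascii=False)
    return qrcode.make(payload)  # returns a PIL image of the code

# Example: make_composite_code({"user_1": "...", "user_2": "..."}).save("code.png")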
In some possible implementations, when there are two or more users in the vehicle, only the two-dimensional code associated with the driver may be displayed. For example, after a two-dimensional code is generated according to the identity information of the user at the driver's position, a window is controlled to display that code.
In some possible implementations, when entering a certain place, a specific two-dimensional code may need to be scanned so that a worker or a machine can register (or check) the visit. Illustratively, this "specific two-dimensional code" is a code particular to the place that identifies information such as its location and name; after a user scans it, information registration for persons entering or leaving the place can be completed automatically. In some implementations, this specific two-dimensional code may be referred to as a "venue code". In the above scenario, after the specific two-dimensional code is scanned, a two-dimensional code carrying time information, or another registration code, is generated for verification by a machine or a worker at the place. In the embodiment of the present application, after a user in the vehicle scans the specific two-dimensional code with a mobile terminal and the mobile terminal generates the two-dimensional code carrying time information or another registration code, the vehicle can control a window to display that code. As shown in fig. 7 (a), when the vehicle travels to the barrier gate (or car stopper), the prompt "please scan the two-dimensional code" shown at 701 is displayed on the boom support of the barrier gate. Further, as shown in fig. 7 (b), the user in the vehicle scans the two-dimensional code with the mobile terminal according to the prompt at 701, and the terminal then jumps to the interface shown in fig. 7 (c). The interface shown in fig. 7 (c) includes a health code 702, the user's name information 703, health code status information (green code) 704, time information (xxxx.xx.xx:xx:xx) 705, and place name information (xx mall) 706. Further, the mobile terminal sends the information contained in the interface to the vehicle. Further, as shown in fig. 7 (d), the vehicle controls a window to display the health code shown at 702. In some possible implementations, the vehicle may also control a window to display a screenshot image of the interface shown in fig. 7 (c).
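The hand-off in this venue-code example, where the mobile terminal pushes the generated registration code to the vehicle for display, could look like the following sketch; the payload fields mirror the interface in fig. 7 (c) and are assumptions, since the embodiment does not define a message format.

def on_registration_code(payload, display, window="front_left_window"):
    """Handle a registration code pushed from the occupant's mobile
    terminal after it scanned the venue code, and show it on a window."""
    code_image = payload["code_image"]  # e.g. the health code shown at 702
    place = payload.get("place_name")   # e.g. the place name shown at 706
    timestamp = payload.get("time")     # e.g. the time information shown at 705
    display.show(window, code_image)    # or a screenshot of the whole interface
    return place, timestamp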
It should be understood that the method of controlling display in the above embodiments is described by taking the 5-seat vehicle shown in fig. 2 as an example, and the embodiments of the present application are not limited thereto. For example, for a 7-seat sport utility vehicle (SUV) or a vehicle with more than 7 seats, the first window may also include more windows.
In the embodiments of the present application, unless otherwise specified or logically conflicting, the terms and descriptions in different embodiments are consistent and may be mutually referenced, and technical features in different embodiments may be combined to form new embodiments according to their inherent logical relationships.
The method provided by the embodiments of the present application has been described in detail above with reference to fig. 4 to 7. The apparatus provided by the embodiments of the present application is described in detail below with reference to fig. 8 and 9. It should be understood that the description of the apparatus embodiments corresponds to the description of the method embodiments; therefore, for brevity, details not described here may be found in the above method embodiments.
Fig. 8 shows a schematic block diagram of an apparatus 2000 for controlling a display according to an embodiment of the present application, where the apparatus 2000 includes a generating unit 2010 and a processing unit 2020.
Optionally, the apparatus 2000 may further include a storage unit, which may be configured to store instructions and/or data, and the processing unit 2020 may read the instructions and/or data in the storage unit, so as to enable the apparatus to implement the foregoing method embodiments.
The apparatus 2000 may include means for performing the method of fig. 4. Moreover, the units in the apparatus 2000 and the other operations and/or functions described above are respectively intended to implement the corresponding flows of the method embodiment in fig. 4.
The apparatus 2000 comprises: a generating unit 2010 configured to generate first image information according to the received first instruction and identity information of the first user in the cabin of the vehicle; the processing unit 2020 is configured to control a first window of the vehicle to display the first image information.
Optionally, the processing unit 2020 is configured to: and controlling the first window to display the first image information according to a first position of the first user in the cabin, wherein the first position corresponds to the first window.
Optionally, the processing unit 2020 is configured to: and controlling the first window to display the first image information according to a second position where the equipment for detecting or identifying the first image information is located, wherein the second position corresponds to the first window.
Optionally, the processing unit 2020 is configured to: and controlling a first area of the first window to display the first image information, wherein the first area corresponds to a third position of the first user in the cockpit.
Optionally, the processing unit 2020 is configured to: control displays the first image information on the first window via the HUD.
Optionally, the apparatus further includes a prompting unit configured to prompt a location of the first window and/or content of the first image information.
Optionally, the processing unit 2020 is configured to: and responding to a first input of the first user, and controlling the first window to be switched to a second window of the vehicle to display the first image information.
Optionally, the first window includes at least one of: the front windshield, the rear windshield, the driver's seat window, the front passenger seat window, the left rear window of the cabin, and the right rear window of the cabin.
Optionally, the first instruction comprises at least one of: a voice instruction, an instruction generated according to an input to an in-vehicle screen of the vehicle, an instruction generated according to an operation of a physical function key of the vehicle, and an instruction generated according to an image recognition result.
Optionally, the first image information includes a two-dimensional code image.
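Putting the unit descriptions above together, a structural sketch of the apparatus 2000 might look as follows; the method names and the division into Python classes are illustrative only, since (as noted below) the division of units is a logical one.

class GeneratingUnit:
    """Unit 2010: builds the first image information."""
    def generate(self, instruction, identity):
        ...  # e.g. fetch the user's code and render it as an image

class ProcessingUnit:
    """Unit 2020: drives the window (or HUD) display."""
    def control_display(self, window, image_info):
        ...  # e.g. route the image to the selected window

class DisplayControlApparatus:
    """Apparatus 2000 composed of the two logical units."""
    def __init__(self):
        self.generating_unit = GeneratingUnit()
        self.processing_unit = ProcessingUnit()

    def handle(self, instruction, identity, window):
        image = self.generating_unit.generate(instruction, identity)
        self.processing_unit.control_display(window, image)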
It should be understood that the division of the units in the above apparatus is only a division of logical functions; in actual implementation, the units may be wholly or partially integrated into one physical entity, or may be physically separate. The units in the apparatus may be implemented in the form of software called by a processor; for example, the apparatus includes a processor connected to a memory in which instructions are stored, and the processor calls the instructions stored in the memory to implement any one of the above methods or the functions of the units of the apparatus, where the processor is, for example, a general-purpose processor such as a CPU or a microprocessor, and the memory is located inside or outside the apparatus. Alternatively, the units in the apparatus may be implemented in the form of hardware circuits, and the functions of some or all of the units may be realized through the design of the hardware circuits, which may be understood as one or more processors. For example, in one implementation, the hardware circuit is an ASIC, and the functions of some or all of the units are realized through the design of the logical relationships of the elements in the circuit; in another implementation, the hardware circuit may be implemented by a PLD; taking an FPGA as an example, it may include a large number of logic gate circuits whose connection relationships are configured through a configuration file, so as to realize the functions of some or all of the units. All the units of the above apparatus may be implemented in the form of software called by a processor, or entirely in the form of hardware circuits, or partly in the form of software called by a processor and partly in the form of hardware circuits.
In the embodiments of the present application, the processor is a circuit with signal processing capability. In one implementation, the processor may be a circuit with instruction reading and executing capability, such as a CPU, a microprocessor, a GPU, or a DSP; in another implementation, the processor may realize a certain function through the logical relationships of hardware circuits, which may be fixed or reconfigurable, for example a hardware circuit implemented by an ASIC or a PLD such as an FPGA. In a reconfigurable hardware circuit, the process in which the processor loads a configuration document to configure the hardware circuit may be understood as the process in which the processor loads instructions to realize the functions of some or all of the above units. In addition, the processor may be a hardware circuit designed for artificial intelligence, which may be understood as an ASIC, such as an NPU, a TPU, or a DPU.
It can be seen that the units in the above apparatus may be one or more processors (or processing circuits) configured to implement the above method, for example a CPU, a GPU, an NPU, a TPU, a DPU, a microprocessor, a DSP, an ASIC, an FPGA, or a combination of at least two of these processor forms.
In addition, all or part of the units in the above apparatus may be integrated together or implemented independently. In one implementation, these units are integrated together and implemented in the form of a system-on-chip (SoC). The SoC may include at least one processor for implementing any one of the above methods or the functions of the units of the apparatus, and the at least one processor may be of different types, for example a CPU and an FPGA, a CPU and an artificial intelligence processor, or a CPU and a GPU.
In a specific implementation, the operations performed by the generating unit 2010 and the processing unit 2020 may be performed by the same processor or by different processors, for example by multiple processors respectively. For example, one or more processors may be connected to one or more sensors of the sensing system 120 in fig. 1, to acquire and process the user's face information or voice information from the one or more sensors; the one or more processors then generate the first image information according to the user's face information or voice information. Alternatively, the one or more processors may also be connected to one or more windows of the display device 130, so as to control the windows to display the first image information. In a specific implementation, the one or more processors may be processors provided in an in-vehicle infotainment unit, or processors provided in another on-board terminal. In a specific implementation, the apparatus 2000 may be a chip provided in the vehicle or in another on-board terminal; for example, the apparatus 2000 may be the computing platform 150 provided in the vehicle shown in fig. 1.
The embodiment of the present application further provides an apparatus, which includes a processing unit and a storage unit, where the storage unit is configured to store instructions, and the processing unit executes the instructions stored in the storage unit, so as to enable the apparatus to perform the method or steps performed by the foregoing embodiment.
Fig. 9 is a schematic block diagram of an apparatus for controlling display according to an embodiment of the present application. The apparatus 2100 for controlling display shown in fig. 9 may include: a processor 2110, a transceiver 2120, and a memory 2130. The processor 2110, the transceiver 2120, and the memory 2130 are connected through an internal connection path; the memory 2130 is configured to store instructions, and the processor 2110 is configured to execute the instructions stored in the memory 2130, so as to receive/transmit related parameters through the transceiver 2120. Optionally, the memory 2130 may be coupled to the processor 2110 through an interface, or may be integrated with the processor 2110.
It should be noted that the transceiver 2120 may include, but is not limited to, a transceiver apparatus such as an input/output (I/O) interface, to enable communication between the apparatus 2100 and other devices or a communication network.
In implementation, the steps of the above method can be completed by integrated logic circuits of hardware in the processor 2110 or by instructions in the form of software. The method disclosed in the embodiments of the present application may be directly embodied as being performed by a hardware processor, or performed by a combination of hardware and software modules in the processor. The software module may be located in a storage medium well known in the art, such as a RAM, a flash memory, a ROM, a PROM, an EPROM, or a register. The storage medium is located in the memory 2130, and the processor 2110 reads the information in the memory 2130 and completes the steps of the above method in combination with its hardware. To avoid repetition, details are not described here.
The processor 2110 may be a general-purpose CPU, a microprocessor, an ASIC, a GPU, or one or more integrated circuits, configured to execute related programs to implement the methods of the embodiments of the present application. The processor 2110 may also be an integrated circuit chip with signal processing capability. The processor 2110 may further be a general-purpose processor, a DSP, an ASIC, an FPGA or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or perform the methods, steps, and logical block diagrams disclosed in the embodiments of the present application. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the method disclosed with reference to the embodiments of the present application may be directly embodied as being performed by a hardware decoding processor, or performed by a combination of hardware and software modules in the decoding processor. The software module may be located in a storage medium well known in the art, such as a RAM, a flash memory, a ROM, a PROM, an EPROM, or a register. The storage medium is located in the memory 2130; the processor 2110 reads the information in the memory 2130 and performs the method of the method embodiments of the present application in combination with its hardware.
The memory 2130 may be a read-only memory (ROM), a static storage device, a dynamic storage device, or a random access memory (RAM).
The transceiver 2120 enables communication between the apparatus 2100 and other devices or a communication network using a transceiver apparatus such as, but not limited to, a transceiver.
The embodiment of the present application also provides a mobile carrier, which may include the apparatus 2000 described above or the apparatus 2100 described above.
For example, the mobile carrier may be a vehicle in the above embodiments.
An embodiment of the present application further provides a computer program product, where the computer program product includes: computer program code which, when run on a computer, causes the computer to perform the method of fig. 4 described above.
Embodiments of the present application also provide a computer-readable storage medium, which stores program code or instructions that, when executed by a processor of a computer, cause the processor to implement the method in fig. 4.
An embodiment of the present application further provides a chip, including: at least one processor and a memory, the at least one processor coupled to the memory for reading and executing instructions in the memory to perform the method of fig. 4 described above.
It should be understood that, for convenience and brevity of description, specific working procedures and beneficial effects of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not described herein again.
In implementation, the steps of the above method may be completed by integrated logic circuits of hardware in a processor or by instructions in the form of software. The method disclosed in the embodiments of the present application may be directly embodied as being performed by a hardware processor, or performed by a combination of hardware and software modules in the processor. The software module may be located in a storage medium well known in the art, such as a RAM, a flash memory, a ROM, a PROM, an EPROM, or a register. The storage medium is located in a memory, and a processor reads the information in the memory and completes the steps of the method in combination with its hardware. To avoid repetition, details are not described here.
It should be understood that, in the embodiments of the present application, the memory may include a read-only memory (ROM) and a random access memory (RAM), and provide instructions and data to the processor.
The terminology used in the embodiments of the present application is for the purpose of describing particular embodiments only and is not intended to limit the application. As used in the specification of this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, such as "one or more", unless the context clearly indicates otherwise. It should also be understood that, in the following embodiments of the present application, "at least one" and "one or more" mean one, two, or more. The term "and/or" describes the association relationship of associated objects and indicates that three relationships may exist; for example, A and/or B may represent: A exists alone, A and B exist simultaneously, or B exists alone, where A and B may be singular or plural. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. "At least one of the following" or similar expressions refer to any combination of these items, including any combination of single or plural items. For example, at least one (item) of a, b, or c may represent: a; b; c; a and b; a and c; b and c; or a, b, and c, where a, b, and c may each be singular or plural.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or parts of the technical solution may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions to enable a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a U disk, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and all the changes or substitutions should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (25)

1. A method of controlling a display, comprising:
generating first image information according to the received first instruction and identity information of a first user in a cabin of the vehicle;
and controlling a first window of the vehicle to display the first image information.
2. The method of claim 1, wherein the controlling the first window of the vehicle to display the first image information comprises:
controlling the first window to display the first image information according to a first position of the first user in the cabin, wherein the first position corresponds to the first window.
3. The method of claim 1 or 2, wherein the controlling the first window of the vehicle to display the first image information comprises:
and controlling the first window to display the first image information according to a second position where the device for detecting or identifying the first image information is located, wherein the second position corresponds to the first window.
4. The method of claim 1, wherein the controlling the first window of the vehicle to display the first image information comprises:
controlling a first area of the first window to display the first image information, the first area corresponding to a third location of the first user in the cockpit.
5. The method of any of claims 1-4, wherein the controlling a first window of the vehicle to display the first image information comprises: and controlling the first image information to be displayed on the first vehicle window through head-up display (HUD).
6. The method according to any one of claims 1 to 5, further comprising:
and prompting the position of the first window and/or the content of the first image information.
7. The method according to any one of claims 1 to 6, further comprising:
in response to a first input by the first user, controlling switching from the first window to a second window of the vehicle to display the first image information.
8. The method of any of claims 1-7, wherein the first window comprises at least one of: a front windshield, a rear windshield, a driver's seat window, a front passenger seat window, a left rear window of the cabin, and a right rear window of the cabin.
9. The method of any of claims 1-8, wherein the first instruction comprises at least one of: a voice instruction, an instruction generated according to an input to an on-vehicle screen of the vehicle, an instruction generated according to an operation of a physical function key of the vehicle.
10. The method according to any one of claims 1 to 9, wherein the first image information comprises a two-dimensional code image.
11. An apparatus for controlling a display, comprising:
a generating unit configured to generate first image information according to the received first instruction and identity information of a first user in a cabin of the vehicle;
and the processing unit is used for controlling a first window of the vehicle to display the first image information.
12. The apparatus of claim 11, wherein the processing unit is configured to:
controlling the first window to display the first image information according to a first position of the first user in the cabin, wherein the first position corresponds to the first window.
13. The apparatus according to claim 11 or 12, wherein the processing unit is configured to:
and controlling the first window to display the first image information according to a second position where the device for detecting or identifying the first image information is located, wherein the second position corresponds to the first window.
14. The apparatus of claim 11, wherein the processing unit is configured to:
controlling a first area of the first window to display the first image information, the first area corresponding to a third location of the first user in the cockpit.
15. The apparatus according to any of claims 11 to 14, wherein the processing unit is configured to: and controlling the first image information to be displayed on the first vehicle window through head-up display (HUD).
16. The apparatus according to any one of claims 11 to 15, further comprising a prompting unit for:
and prompting the position of the first window and/or the content of the first image information.
17. The apparatus according to any of claims 11 to 16, wherein the processing unit is configured to:
in response to a first input by the first user, controlling switching from the first window to a second window of the vehicle to display the first image information.
18. The apparatus of any of claims 11 to 17, wherein the first window comprises at least one of: a front windshield, a rear windshield, a driver's seat window, a front passenger seat window, a left rear window of the cabin, and a right rear window of the cabin.
19. The apparatus according to any one of claims 11 to 18, wherein the first instruction comprises at least one of: a voice instruction, an instruction generated according to an input to an in-vehicle screen of the vehicle, an instruction generated according to an operation of a physical function key of the vehicle.
20. The apparatus according to any one of claims 11 to 19, wherein the first image information comprises a two-dimensional code image.
21. An apparatus for controlling a display, comprising:
a memory for storing a computer program;
a processor for executing a computer program stored in the memory to cause the apparatus to perform the method of any of claims 1 to 10.
22. A mobile carrier comprising the apparatus of any one of claims 11 to 21.
23. The mobile carrier of claim 22, wherein the mobile carrier is a vehicle.
24. A computer readable storage medium having stored thereon instructions which, when executed by a processor, cause the processor to carry out the method of any one of claims 1 to 10.
25. A chip comprising a processor and a data interface, the processor reading instructions stored on a memory through the data interface to perform the method of any one of claims 1 to 10.


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination