WO2024032131A1 - Screen control method, vehicle-mounted device, computer storage medium and vehicle - Google Patents

Screen control method, vehicle-mounted device, computer storage medium and vehicle

Info

Publication number
WO2024032131A1
WO2024032131A1 (Application PCT/CN2023/099280)
Authority
WO
WIPO (PCT)
Prior art keywords
depth information
preset
preset area
sub
screen
Prior art date
Application number
PCT/CN2023/099280
Other languages
English (en)
French (fr)
Inventor
纪德威
Original Assignee
ZTE Corporation (中兴通讯股份有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ZTE Corporation (中兴通讯股份有限公司)
Publication of WO2024032131A1 publication Critical patent/WO2024032131A1/zh

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R16/00Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for
    • B60R16/02Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for electric constitutive elements
    • B60R16/037Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for electric constitutive elements for occupant comfort, e.g. for automatic adjustment of appliances according to personal settings, e.g. seats, mirrors, steering wheel
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris

Definitions

  • the present disclosure relates to the field of intelligent driving technology, and specifically relates to a screen control method, a vehicle-mounted device, a computer storage medium and a vehicle.
  • the present disclosure provides a screen control method, a vehicle-mounted device, a computer storage medium and a vehicle.
  • embodiments of the present disclosure provide a screen control method.
  • the method includes: detecting depth information between the screen and objects in a preset area; determining whether there is a human body in the preset area based on the detected depth information; when it is determined that there is a human body in the preset area, controlling the switch state of the screen to be on; and when it is determined that there is no human body in the preset area, controlling the switch state of the screen to be off.
  • embodiments of the present disclosure provide a vehicle-mounted device, including: one or more processors; and a storage device on which one or more programs are stored; when the one or more programs are executed by the one or more processors, the one or more processors are enabled to implement the preceding screen control method.
  • embodiments of the present disclosure provide a computer storage medium on which a computer program is stored, wherein when the program is executed, the screen control method as described above is implemented.
  • embodiments of the present disclosure provide a vehicle.
  • the vehicle includes a vehicle-mounted device, and the vehicle-mounted device is the vehicle-mounted device as described above.
  • Figure 1 is a schematic flowchart 1 of a screen control method provided by an embodiment of the present disclosure.
  • Figure 2 is a schematic flowchart 2 of a screen control method provided by an embodiment of the present disclosure.
  • Figure 3 is a schematic flowchart 3 of a screen control method provided by an embodiment of the present disclosure.
  • Figure 4 is a schematic flowchart 4 of a screen control method provided by an embodiment of the present disclosure.
  • Figure 5 is a schematic flowchart 5 of a screen control method provided by an embodiment of the present disclosure.
  • Figure 6 is a schematic flowchart 6 of a screen control method provided by an embodiment of the present disclosure.
  • Figure 7 is a schematic flowchart 7 of a screen control method provided by an embodiment of the present disclosure.
  • Figure 8 is a schematic flowchart 8 of a screen control method provided by an embodiment of the present disclosure.
  • Figure 9 is a Depth diagram of an 8*8 multi-point TOF sensor provided by an embodiment of the present disclosure.
  • Figure 10 is a screen control workflow diagram provided by an embodiment of the present disclosure.
  • Embodiments described herein may be described with reference to plan and/or cross-sectional illustrations, with the aid of idealized schematic illustrations of the present disclosure. Accordingly, the example illustrations may be modified based on manufacturing techniques and/or tolerances, and the embodiments are not limited to those shown in the drawings but include modifications of configurations formed on the basis of the manufacturing process. The regions illustrated in the figures are therefore schematic in nature, and the shapes of the regions shown illustrate the specific shapes of regions of an element but are not intended to be limiting.
  • the current mainstream car screen control is the always-on mode: after the vehicle is started, the car screen (such as the display screen in front of the passenger or rear seat) remains in the working state regardless of whether a user is sitting on the seat. Over time, the power consumption of the car grows larger and larger.
  • embodiments of the present disclosure propose that intelligent control of the screen can be realized by monitoring in real time whether a user is sitting on the seat in front of the screen, and turning the screen on only when a user is seated there. This can reduce the power consumption of the car, thereby achieving the purpose of saving energy.
  • Embodiments of the present disclosure further propose that, specifically, it can be determined whether there is a user sitting on the seat in front of the screen by periodically detecting the depth information between the screen and objects in the preset area.
  • an embodiment of the present disclosure provides a screen control method, which may include the following steps S11 to S14.
  • step S11 depth information between the screen and objects in the preset area is detected.
  • step S12 it is determined whether there is a human body in the preset area based on the detected depth information.
  • wherein, when it is determined that a human body exists in the preset area, step S13 is executed; when it is determined that no human body exists in the preset area, step S14 is executed.
  • step S13 when it is determined that a human body exists in the preset area, the switch state of the control screen is turned on.
  • step S14 when it is determined that there is no human body in the preset area, the switch state of the control screen is turned off.
  • the preset area can be the area in front of the screen including the seat; the depth information is distance information, known in English as Depth, and later mentions of Depth refer to the depth information.
  • when a human body is present in the preset area, it may indicate that a user is sitting on the seat; when there is no human body in the preset area, it indicates that no user has sat on the seat or that a previously seated user has already left.
  • controlling the switch state of the screen to be on may include the following operations. For example, before step S13, if the switch state of the screen is already on, there is no need to change it and the screen simply remains on; if the switch state of the screen is off, step S13 can be performed by generating a first control signal that controls the screen to turn on, and the screen then turns on and displays according to the first control signal, allowing the user to view menus, perform click operations, and so on.
  • controlling the switch state of the screen to be off may include the following operations. Before step S14, if the switch state of the screen is already off, there is no need to change it and the screen simply remains off; if the switch state of the screen is on, step S14 can be performed by generating a second control signal that controls the screen to turn off. When it is determined that there is no longer a human body in the preset area, the second control signal is generated and the screen is turned off in time, which further reduces the power consumption of the car.
  • step S11 can detect the depth information between the screen and the object in the preset area under preset conditions; it can also be detected in real time or periodically.
  • step S12 may be: determining whether there is a human body in the preset area within a preset time period based on the detected depth information. For example, by comparing the detected depth information of two adjacent preset time periods, it is determined whether there is a human body in the preset area within the next preset time period.
  • step S11 may be performed by a time of flight (TOF) sensor.
  • the TOF sensor can first emit laser light forward, then receive the laser light reflected from the object in front, and finally calculate the flight distance based on the time difference between emitting the laser light and receiving the reflected laser light, thereby obtaining depth information.
  • the TOF sensor divides the plane into multiple areas, such as 4*4 areas or 8*8 areas, emits laser light forward and receives the laser light reflected back by objects in front, thereby measuring the distance between the TOF sensor and objects in different areas; this is usually applied in the auxiliary focusing of terminal devices such as mobile phones.
  • the embodiment of the present disclosure installs a TOF sensor on the car screen to detect the depth information between the screen and the object in front of the screen in real time, thereby determining whether there is a user sitting on the seat.
  • after the car, the in-vehicle multimedia system and the TOF sensor are started one after another, the TOF sensor begins to monitor depth information in real time. When it is detected that a user sits on the seat, the screen display can be automatically turned on; when it is detected that the user leaves (gets out of the car), the screen display is automatically turned off. This reduces power consumption and achieves a good interactive experience between people and vehicles.
  • the screen control method provided by the embodiments of the present disclosure detects the depth information between the screen and objects in the preset area and determines, based on the detected depth information, whether there is a human body in the preset area; the switch state of the screen is controlled to be on only when it is determined that a human body is present, and controlled to be off only when it is determined that no human body is present. This can effectively reduce vehicle power consumption to save energy, and can also provide a good human-vehicle interaction experience.
  • the step of detecting depth information between the screen and the object in the preset area is performed multiple times. As shown in Figure 2, determining whether there is a human body in the preset area based on the detected depth information (i.e., step S12) may include the following steps S21 to S22.
  • step S21 it is determined whether the currently detected depth information changes based on the last detected depth information.
  • step S22 if it is determined that the currently detected depth information has changed, it is determined whether there is a human body in the preset area based on the currently detected depth information.
  • each execution of the step of detecting the depth information between the screen and the object in the preset area may be performed when a preset period arrives, or when certain preset conditions are met; for example, the preset conditions can be: a door opening, the vehicle starting, the seat angle changing, and so on.
  • that is, the last detected depth information is compared with the currently detected depth information; when the comparison result is inconsistent, the currently detected depth information has changed.
  • the TOF sensor divides the plane into multiple areas and detects the distance to objects in each area.
  • each detected depth information corresponds to each sub-preset area one-to-one.
  • as shown in Figure 3, detecting the depth information between the screen and the object in the preset area (i.e., step S11) may include the following steps S31 to S32.
  • step S31 the preset area is evenly divided into sub-preset areas.
  • step S32 depth information between the screen and objects in each sub-preset area is detected.
  • determining whether the currently detected depth information has changed based on the last detected depth information may include the following steps S41 to S43 .
  • step S41 the absolute value of the first depth information difference between each depth information currently detected and the corresponding depth information detected last time is calculated.
  • step S42 the number of first depth information difference absolute values greater than the first preset difference threshold is calculated.
  • step S43 if the number is greater than the preset number threshold, it is determined that the currently detected depth information has changed.
  • the "corresponding depth information detected last time” refers to the depth information corresponding to the same sub-preset area as "each depth information currently detected”.
  • the absolute value of the depth information difference refers to the absolute value of the difference between two depth information.
  • the first preset difference threshold can be expressed as Dt, which is obtained by collecting data for training in advance.
  • the embodiment of the present disclosure has no specific limit on the preset quantity threshold, as long as the preset quantity threshold is greater than 2.
  • taking periodic detection as an example, the TOF sensor works at a certain frame rate, usually 15 frames/s or 30 frames/s (corresponding to a cycle interval of 200/3 ms or 100/3 ms), and the Depth detected from the image frames acquired in each cycle is recorded in sequence.
  • taking the TOF sensor dividing the preset area into 8*8 sub-preset areas as an example, the Depth detected in the previous cycle can be recorded as {Dp1, Dp2, ... Dpm} and the Depth detected in the current cycle as {Dn1, Dn2, ... Dnm}, where m represents the number of TOF points (that is, the total number of sub-preset areas), which is 64.
  • determining whether there is a human body in the preset area based on the currently detected depth information may include the following steps:
  • step S51 determine the absolute value of the second depth information difference corresponding to each sub-preset area based on the currently detected depth information
  • step S52 if the absolute value of any second depth information difference is greater than the second preset difference threshold, whether there is a human body in the preset area is determined according to the sub-preset areas whose absolute value of the second depth information difference is greater than the second preset difference threshold.
  • the absolute value of the second depth information difference is used to represent the difference between the Depth of the sub-preset area and the Depth of the adjacent sub-preset area.
  • when the Depth difference between two sub-preset areas is large, the sub-preset areas that differ significantly from their adjacent sub-preset areas need to be extracted, and further judgment is then made based on these sub-preset areas.
  • as shown in Figure 6, determining whether there is a human body in the preset area according to the sub-preset areas whose absolute value of the second depth information difference is greater than the second preset difference threshold (i.e., step S52) may include the following steps S61 to S62.
  • step S61 the suspected head-shoulder ratio is determined based on the number of sub-preset regions in the preset direction whose absolute value of the second depth information difference is greater than the second preset difference threshold.
  • step S62 when the suspected head-shoulder ratio is within the preset head-shoulder ratio interval, it is determined that a human body exists in the preset area.
  • the preset direction can be horizontal or, of course, vertical. Since the preset area is evenly divided into multiple sub-preset areas, the sub-preset areas are arranged not only in the preset direction but also in the direction perpendicular to it, and the number of sub-preset areas in the preset direction whose absolute value of the second depth information difference is greater than the second preset difference threshold can be more than one.
  • it is first necessary to determine the suspected head width and the suspected shoulder width, and the suspected head-shoulder ratio is then determined from the suspected head width and the suspected shoulder width.
  • as shown in Figure 7, determining the suspected head-shoulder ratio based on the number of sub-preset areas in the preset direction whose absolute value of the second depth information difference is greater than the second preset difference threshold (i.e., step S61) may include the following steps S71 to S72.
  • step S71 determine the suspected head width and suspected shoulder width based on the number of sub-preset areas in the preset direction whose absolute value of the second depth information difference is greater than the second preset difference threshold;
  • step S72 the suspected head-to-shoulder ratio is determined based on the suspected head width and the suspected shoulder width.
  • determining the absolute value of the second depth information difference corresponding to each sub-preset area according to the currently detected depth information may include the following steps S81 to S82 .
  • step S81 according to the currently detected depth information, the absolute value of the third depth information difference between each sub-preset area and each of its adjacent sub-preset areas is determined.
  • step S82 the maximum value among the third depth information difference absolute values corresponding to each sub-preset area is determined as the second depth information difference absolute value corresponding to the sub-preset area.
  • since each sub-preset area has 3 or 4 adjacent sub-preset areas, for each sub-preset area the absolute value of the Depth difference from each adjacent sub-preset area is determined; this type of value is called the third depth information difference absolute value, and there are 3 or 4 of them. The maximum of these 3 or 4 third depth information difference absolute values is then taken as the second depth information difference absolute value of that sub-preset area.
  • traversing each sub-preset area in this way yields the second depth information difference absolute values of the 64 sub-preset areas: {Δ1, Δ2, ... Δ64}.
  • the maximum value ⁇ n of the Depth difference when the user leaves the seat is obtained by collecting data for training in advance, which is called the second preset difference threshold. When there is ⁇ greater than ⁇ n in ⁇ 1, ⁇ 2,... ⁇ 64 ⁇ , it means that the object in the preset area is suspected to be a human body.
  • the suspected head width and the suspected shoulder width can then be calculated; for example, directly taking the smallest width as the suspected head width and the largest width as the suspected shoulder width gives 2 and 7, respectively.
  • if the suspected head-shoulder ratio R falls within the head-shoulder ratio interval obtained in advance by training on a large amount of collected data, it can be determined that a human body exists in the preset area.
  • this is because the head and shoulders of a human body are not of the same width: the head is narrower overall and the shoulders are wider. Other objects or pets do not show this trend; for them, the width of the "head" and the width of the "shoulders" are close to the same. Assume that the preset head-shoulder ratio interval is [Rmin, Rmax].
  • determining whether a human body exists in the preset area based on the currently detected depth information may also include the following step: in the case where no second depth information difference absolute value is greater than the second preset difference threshold, it is determined that there is no human body in the preset area.
  • in this case, the object in the preset area is not a human body, but may be an object or a pet sitting on the seat.
  • as shown in Figure 10, a screen control workflow may include the following steps: start the vehicle; start the multimedia system; start the TOF sensor; periodically detect Depth with the TOF sensor; determine whether the Depth detected in the current cycle has changed: if so, determine whether there is a human body in the preset area, otherwise continue detecting Depth; if it is determined that there is a human body in the preset area and the screen is currently off, turn the screen on and then continue detecting Depth; if it is determined that there is no human body in the preset area and the screen is currently on, turn the screen off and then continue detecting Depth.
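The workflow above is essentially a detection loop. The sketch below is only an illustration (the function names are not from the patent; the change test and human-body test are stand-ins for steps S21 and S51-S62):

```python
def run_screen_control(frames, detect_human):
    """frames: iterable of per-cycle Depth lists; detect_human: Depth -> bool."""
    screen_on = False
    prev = None
    events = []                                        # record on/off transitions
    for depth in frames:
        changed = prev is not None and depth != prev   # stand-in for step S21
        if changed:
            human = detect_human(depth)                # stand-in for steps S51-S62
            if human and not screen_on:
                screen_on = True
                events.append("screen_on")
            elif not human and screen_on:
                screen_on = False
                events.append("screen_off")
        prev = depth
    return events

# A user sits down (distances shorten), stays, then leaves again.
far, near = [1500.0] * 64, [600.0] * 64
events = run_screen_control([far, near, near, far],
                            detect_human=lambda d: min(d) < 1000.0)
print(events)   # ['screen_on', 'screen_off']
```

Note how the middle cycle, whose Depth is unchanged, triggers no recognition at all, mirroring the "otherwise continue to detect the Depth" branch.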
  • embodiments of the present disclosure also provide a vehicle-mounted device, which may include: one or more processors; and a storage device on which one or more programs are stored; when the one or more programs are executed by the one or more processors, the one or more processors are enabled to implement the screen control method as described above.
  • embodiments of the present disclosure also provide a computer storage medium on which a computer program is stored, wherein when the program is executed, the screen control method as described above is implemented.
  • an embodiment of the present disclosure also provides a vehicle.
  • the vehicle includes a vehicle-mounted device, and the vehicle-mounted device is the vehicle-mounted device as described above.
  • the vehicle further includes a time-of-flight (TOF) sensor. The TOF sensor is used to emit an optical signal and receive a reflected signal, the reflected signal being the optical signal reflected by objects in the preset area; the processor is used to determine the depth information between the screen and the objects in the preset area based on the time interval between emitting the optical signal and receiving the reflected signal.
  • the switch state of the screen is controlled to be on only when it is determined that there is a human body in the preset area, and controlled to be off only when it is determined that there is no human body in the preset area. This can not only effectively reduce the power consumption of the car to save energy, but also provide a good human-vehicle interactive experience.
  • Such software may be distributed on computer-readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media).
  • computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data.
  • computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, Digital Versatile Disk (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by a computer.
  • communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery media.
  • Example embodiments have been disclosed herein, and although specific terms are employed, they are used and should be interpreted in a generic and descriptive sense only and not for purposes of limitation. In some instances, as would be apparent to one skilled in the art, features, characteristics and/or elements described in connection with a particular embodiment may be used alone or in combination with features, characteristics and/or elements described in connection with other embodiments, unless expressly stated otherwise. Accordingly, it will be understood by those skilled in the art that various changes in form and details may be made without departing from the scope of the present disclosure as set forth in the appended claims.

Landscapes

  • Engineering & Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Fittings On The Vehicle Exterior For Carrying Loads, And Devices For Holding Or Mounting Articles (AREA)
  • Measurement Of Velocity Or Position Using Acoustic Or Ultrasonic Waves (AREA)

Abstract

The present disclosure provides a screen control method, the method including: detecting depth information between a screen and objects in a preset area; determining, based on the detected depth information, whether a human body is present in the preset area; when it is determined that a human body is present in the preset area, controlling the switch state of the screen to be on; and when it is determined that no human body is present in the preset area, controlling the switch state of the screen to be off. The present disclosure further provides a vehicle-mounted device, a computer storage medium and a vehicle.

Description

Screen control method, vehicle-mounted device, computer storage medium and vehicle
Cross-reference to related applications
The present disclosure claims priority to the Chinese patent application CN202210966460.4, entitled "Screen control method, vehicle-mounted device, computer storage medium and vehicle" and filed on 12 August 2022, the entire contents of which are incorporated into the present disclosure by reference.
Technical field
The present disclosure relates to the field of intelligent driving technology, and specifically relates to a screen control method, a vehicle-mounted device, a computer storage medium and a vehicle.
Background
With the continuous development of new energy vehicle technology, intelligent car cockpits have gradually become an important development direction of the automobile industry. Cockpit intelligence is reflected not only in autonomous and assisted driving, but also in multimedia entertainment. The car is no longer merely a means of transport; it is gradually becoming an important place for leisure and entertainment. Information interaction between people and cars keeps increasing, and the vehicle-mounted screen has become the most important information interaction tool. As vehicle-mounted screens and other interaction tools multiply, there is a technical problem of ever-growing power consumption.
Summary
In view of at least the above technical problem, the present disclosure provides a screen control method, a vehicle-mounted device, a computer storage medium and a vehicle.
In a first aspect, embodiments of the present disclosure provide a screen control method, the method including: detecting depth information between a screen and objects in a preset area; determining, based on the detected depth information, whether a human body is present in the preset area; when it is determined that a human body is present in the preset area, controlling the switch state of the screen to be on; and when it is determined that no human body is present in the preset area, controlling the switch state of the screen to be off.
In a second aspect, embodiments of the present disclosure provide a vehicle-mounted device, including: one or more processors; and a storage device on which one or more programs are stored; when the one or more programs are executed by the one or more processors, the one or more processors implement the foregoing screen control method.
In a third aspect, embodiments of the present disclosure provide a computer storage medium on which a computer program is stored, wherein the screen control method described above is implemented when the program is executed.
In a fourth aspect, embodiments of the present disclosure provide a vehicle, the vehicle including a vehicle-mounted device, the vehicle-mounted device being the vehicle-mounted device described above.
Brief description of the drawings
Figure 1 is a schematic flowchart 1 of a screen control method provided by an embodiment of the present disclosure;
Figure 2 is a schematic flowchart 2 of a screen control method provided by an embodiment of the present disclosure;
Figure 3 is a schematic flowchart 3 of a screen control method provided by an embodiment of the present disclosure;
Figure 4 is a schematic flowchart 4 of a screen control method provided by an embodiment of the present disclosure;
Figure 5 is a schematic flowchart 5 of a screen control method provided by an embodiment of the present disclosure;
Figure 6 is a schematic flowchart 6 of a screen control method provided by an embodiment of the present disclosure;
Figure 7 is a schematic flowchart 7 of a screen control method provided by an embodiment of the present disclosure;
Figure 8 is a schematic flowchart 8 of a screen control method provided by an embodiment of the present disclosure;
Figure 9 is a Depth schematic diagram of an 8*8 multi-point TOF sensor provided by an embodiment of the present disclosure; and
Figure 10 is a screen control workflow diagram provided by an embodiment of the present disclosure.
Detailed description
Example embodiments will be described more fully hereinafter with reference to the accompanying drawings, but the example embodiments may be embodied in different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that the present disclosure will be thorough and complete, and will fully convey the scope of the present disclosure to those skilled in the art.
As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit the present disclosure. As used herein, the singular forms "a" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will further be understood that when the terms "comprising" and/or "made of" are used in the present disclosure, they specify the presence of the stated features, integers, steps, operations, elements and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or groups thereof.
The embodiments described herein may be described with reference to plan and/or cross-sectional views with the aid of idealized schematic illustrations of the present disclosure. Accordingly, the example illustrations may be modified according to manufacturing techniques and/or tolerances. Therefore, the embodiments are not limited to those shown in the drawings, but include modifications of configurations formed on the basis of manufacturing processes. Accordingly, the regions illustrated in the figures have schematic properties, and the shapes of the regions shown in the figures illustrate the specific shapes of regions of elements, but are not intended to be limiting.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art. It will further be understood that terms such as those defined in commonly used dictionaries should be interpreted as having a meaning consistent with their meaning in the context of the related art and the present disclosure, and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
With the continuous development of new energy vehicle technology, intelligent car cockpits have gradually become an important development direction of the automobile industry. Cockpit intelligence is reflected not only in autonomous and assisted driving, but also in multimedia entertainment. The car is no longer merely a means of transport; it is gradually becoming an important place for leisure and entertainment. Information interaction between people and cars keeps increasing, and the vehicle-mounted screen has become the most important information interaction tool. As vehicle-mounted screens and other interaction tools multiply, power consumption control is becoming more and more important, and how to improve the human-vehicle interaction experience while reducing power consumption has become an increasingly important engineering topic. The current mainstream car screen control is the always-on mode, that is, after the vehicle is started, the car screen (such as the display screen in front of the front passenger or the rear seats) is always in the working state. With the continuous development of cockpit intelligence, more and more screens are used in cars, and power consumption keeps growing.
At present, car screen control usually adopts the always-on mode, that is, after the vehicle is started, the screen keeps working regardless of whether a user is sitting on the seat; over time, the power consumption of the car grows larger and larger. In view of this, embodiments of the present disclosure propose that intelligent control of the screen can be achieved by monitoring in real time whether a user is seated on the seat in front of the screen, and turning the screen on only when a user is detected on that seat; this can reduce the power consumption of the car and thereby save energy. Embodiments of the present disclosure further propose that, specifically, whether a user is seated on the seat in front of the screen can be determined by periodically detecting the depth information between the screen and objects in a preset area.
Accordingly, as shown in Figure 1, an embodiment of the present disclosure provides a screen control method, which may include the following steps S11 to S14.
In step S11, depth information between the screen and objects in the preset area is detected.
In step S12, whether a human body is present in the preset area is determined based on the detected depth information.
Wherein, when it is determined that a human body is present in the preset area, step S13 is executed; when it is determined that no human body is present in the preset area, step S14 is executed.
In step S13, when it is determined that a human body is present in the preset area, the switch state of the screen is controlled to be on.
In step S14, when it is determined that no human body is present in the preset area, the switch state of the screen is controlled to be off.
Wherein, the preset area may be the area in front of the screen including the seat; the depth information is distance information, known in English as Depth, and later mentions of Depth refer to the depth information. In an exemplary embodiment, when a human body is present in the preset area, this may indicate that a user is sitting on the seat; when no human body is present in the preset area, this indicates that no user has yet sat on the seat or that a previously seated user has already left.
In an exemplary embodiment, controlling the switch state of the screen to be on may include the following operations. For example, before step S13, if the switch state of the screen is already on, there is no need to change it and the screen remains on; if the switch state of the screen is off, step S13 may be performed by generating a first control signal that controls the screen to turn on, and the screen turns on and displays according to the first control signal, allowing the user to view menus, perform click operations and so on.
In an exemplary embodiment, controlling the switch state of the screen to be off may include the following operations. Before step S14, if the switch state of the screen is already off, there is no need to change it and the screen remains off; if the switch state of the screen is on, step S14 may be performed by generating a second control signal that controls the screen to turn off. When it is determined that no human body remains in the preset area, the second control signal is generated and the screen is turned off in time, which further reduces the power consumption of the car.
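The on/off control described above can be sketched as a small state machine. This is only an illustrative sketch (names such as `first_control_signal` are labels for the two control signals, not identifiers from the patent): a control signal is generated only when the screen state actually needs to change.

```python
class ScreenController:
    """Generates a control signal only when the switch state must change."""

    def __init__(self):
        self.screen_on = False  # assume the screen starts in the off state

    def update(self, human_present: bool) -> str:
        """Return the control action taken for this detection cycle."""
        if human_present and not self.screen_on:
            self.screen_on = True
            return "first_control_signal"   # turn the screen on
        if not human_present and self.screen_on:
            self.screen_on = False
            return "second_control_signal"  # turn the screen off
        return "no_change"                  # state is already correct

ctrl = ScreenController()
print(ctrl.update(True))    # first_control_signal
print(ctrl.update(True))    # no_change
print(ctrl.update(False))   # second_control_signal
```

Keeping the "no change needed" branch explicit mirrors the text above: an already-on screen simply stays on, and an already-off screen simply stays off.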
In an exemplary embodiment, step S11 may detect the depth information between the screen and objects in the preset area under preset conditions; the detection may also be performed in real time or periodically.
In an exemplary embodiment, step S12 may be: determining, based on the detected depth information, whether a human body is present in the preset area within a preset time period. For example, the depth information detected in two adjacent preset time periods is compared to determine whether a human body is present in the preset area in the later of the two periods.
In some embodiments, step S11 may be performed by a time-of-flight (TOF) sensor. The TOF sensor first emits laser light forward, then receives the laser light reflected back by the object in front, and finally calculates the flight distance from the time difference between emitting the laser and receiving the reflected laser, thereby obtaining the depth information. In an exemplary embodiment, the TOF sensor divides the plane into multiple areas, such as 4*4 or 8*8 areas, emits laser light forward and receives the laser light reflected back by objects in front, realizing distance measurement between the TOF sensor and objects in different areas; this is usually applied in the auxiliary focusing of terminal devices such as mobile phones. In the embodiments of the present disclosure, a TOF sensor is installed on the car screen to detect in real time the depth information between the screen and the object in front of it, thereby determining whether a user is sitting on the seat. After the car, the in-vehicle multimedia system and the TOF sensor are started one after another, the TOF sensor begins to monitor depth information in real time. When it is detected that a user sits on the seat, the screen display can be turned on automatically; when it is detected that the user leaves (gets out of the car), the screen display is turned off automatically, which both reduces power consumption and achieves a good interactive experience between people and the vehicle.
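The time-of-flight principle above amounts to depth = speed of light × round-trip delay / 2. A minimal sketch with illustrative numbers:

```python
# Illustrative sketch of the TOF principle: the laser travels to the
# object and back, so the distance is half of (speed of light x delay).

C = 299_792_458.0  # speed of light in m/s

def tof_depth_m(round_trip_time_s: float) -> float:
    """Distance to the reflecting object, from the emit-to-receive delay."""
    return C * round_trip_time_s / 2.0

# A reflection received about 6.67 ns after emission is roughly 1 m away.
print(round(tof_depth_m(6.67e-9), 2))   # 1.0
```

Real multi-point TOF sensors perform this computation once per zone, yielding one Depth value per sub-preset area.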
It can be seen from steps S11 to S14 above that the screen control method provided by the embodiments of the present disclosure detects the depth information between the screen and objects in the preset area and determines, based on the detected depth information, whether a human body is present in the preset area; the switch state of the screen is controlled to be on only when it is determined that a human body is present in the preset area, and controlled to be off only when it is determined that no human body is present. This both effectively reduces the power consumption of the car to save energy and provides a good human-vehicle interaction experience.
In some embodiments, the step of detecting the depth information between the screen and objects in the preset area is executed multiple times. As shown in Figure 2, determining whether a human body is present in the preset area based on the detected depth information (i.e., step S12) may include the following steps S21 to S22.
In step S21, whether the currently detected depth information has changed is determined based on the last detected depth information.
In step S22, when it is determined that the currently detected depth information has changed, whether a human body is present in the preset area is determined based on the currently detected depth information.
Wherein, each execution of the step of detecting the depth information between the screen and objects in the preset area may be triggered when a preset period arrives, or when certain preset conditions are met; for example, the preset conditions may be: a door opening, the vehicle starting, the seat angle changing, and so on.
The depth information between the screen and objects in the preset area is detected multiple times; once the currently detected depth information is found to have changed relative to the last detected depth information, human body recognition can further be performed on the preset area based on the currently detected depth information.
Wherein, determining whether the currently detected depth information has changed based on the last detected depth information means comparing the last detected depth information with the currently detected depth information; when the comparison result is inconsistent, the currently detected depth information has changed.
It should be understood that, when it is determined that the currently detected depth information has not changed, it can basically be concluded that no human body is present in the preset area. As mentioned above, the TOF sensor divides the plane into multiple areas and detects the distance to objects in each area separately. Accordingly, in some embodiments, each piece of detected depth information corresponds one-to-one to a sub-preset area. As shown in Figure 3, detecting the depth information between the screen and objects in the preset area (i.e., step S11) may include the following steps S31 to S32.
In step S31, the preset area is evenly divided into sub-preset areas.
In step S32, the depth information between the screen and objects in each sub-preset area is detected.
The last detected depth information and the currently detected depth information are compared sub-preset area by sub-preset area to determine the number of sub-preset areas whose depth information difference is too large; when the number of such sub-preset areas reaches a certain threshold, the currently detected depth information can be considered to have changed.
Accordingly, in some embodiments, as shown in Figure 4, determining whether the currently detected depth information has changed based on the last detected depth information (i.e., step S21) may include the following steps S41 to S43.
In step S41, the absolute value of the first depth information difference between each currently detected piece of depth information and the corresponding piece detected last time is calculated.
In step S42, the number of first depth information difference absolute values greater than a first preset difference threshold is counted.
In step S43, when the number is greater than a preset number threshold, it is determined that the currently detected depth information has changed.
Wherein, the "corresponding piece of depth information detected last time" refers to the depth information corresponding to the same sub-preset area as "each currently detected piece of depth information". The absolute value of a depth information difference is the absolute value of the difference between two pieces of depth information. The first preset difference threshold, which may be denoted Dt, is obtained in advance by collecting data for training. The embodiments of the present disclosure place no specific limit on the preset number threshold, as long as it is greater than 2.
Taking the case where the step of detecting the depth information between the screen and objects in the preset area is executed periodically as an example, the TOF sensor works at a certain frame rate, usually 15 frames/s or 30 frames/s (corresponding to a cycle interval of 200/3 ms or 100/3 ms), and the Depth detected from the image frames acquired in each cycle is recorded in sequence. Taking the TOF sensor dividing the preset area into 8*8 sub-preset areas as an example, the Depth detected in the previous cycle may be recorded as {Dp1, Dp2, ... Dpm} and the Depth detected in the current cycle as {Dn1, Dn2, ... Dnm}, where m represents the number of TOF points (i.e., the total number of sub-preset areas), namely 64. The absolute value of the difference between the Depth of each sub-preset area in the previous cycle and in the current cycle is calculated in turn: δ1 = |Dp1 - Dn1|, δ2 = |Dp2 - Dn2|, ... δm = |Dpm - Dnm|, giving {δ1, δ2, ... δm}. When multiple δ in {δ1, δ2, ... δm} are greater than the first preset difference threshold Dt, the Depth detected in the current cycle can be considered to have changed.
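Steps S41 to S43 with the notation above can be sketched as follows; Dt and the count threshold here are illustrative stand-ins for the values that would be obtained by training on collected data:

```python
def depth_changed(prev, curr, dt=100.0, count_threshold=3):
    """True when more than count_threshold per-area |Dp - Dn| exceed Dt."""
    assert len(prev) == len(curr)             # one Depth per sub-preset area
    deltas = [abs(p - n) for p, n in zip(prev, curr)]   # step S41
    num_large = sum(1 for d in deltas if d > dt)        # step S42
    return num_large > count_threshold                  # step S43

# 8*8 grid (m = 64): a person sitting down shortens many distances at once.
prev = [1500.0] * 64                           # empty seat, Depth in mm
curr = [1500.0] * 48 + [600.0] * 16            # 16 areas now see a body
print(depth_changed(prev, curr))               # True
print(depth_changed(prev, prev))               # False
```

Requiring several large per-area differences, rather than a single one, makes the change test robust against noise in individual TOF zones.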
When the currently detected Depth is determined to have changed, human-body recognition can be further performed on the preset area based on the currently detected depth information. Accordingly, in some embodiments, as shown in Fig. 5, determining from the currently detected depth information whether a human body is present in the preset area (i.e., as described in step S22) may include the following steps:
In step S51, a second absolute depth difference corresponding to each sub-preset-area is determined from the currently detected depth information;
In step S52, if any of the second absolute depth differences is greater than a second preset difference threshold, whether a human body is present in the preset area is determined from the sub-preset-areas whose second absolute depth difference is greater than the second preset difference threshold.
The second absolute depth difference characterizes the difference between a sub-preset-area's Depth and the Depths of its adjacent sub-preset-areas. When two adjacent sub-preset-areas differ greatly in Depth, the sub-preset-areas differing greatly from their neighbours need to be extracted, and a further judgment is then made based on them.
Accordingly, in some embodiments, as shown in Fig. 6, determining whether a human body is present in the preset area from the sub-preset-areas whose second absolute depth difference is greater than the second preset difference threshold (i.e., as described in step S52) may include the following steps S61 to S62.
In step S61, a suspected head-shoulder ratio is determined from the number, along a preset direction, of sub-preset-areas whose second absolute depth difference is greater than the second preset difference threshold.
In step S62, if the suspected head-shoulder ratio falls within a preset head-shoulder ratio interval, a human body is determined to be present in the preset area.
The preset direction may be horizontal or, of course, vertical. Since the preset area is evenly divided into multiple sub-preset-areas, which are arranged both along the preset direction and perpendicular to it, there are multiple sub-preset-areas along the preset direction whose second absolute depth difference is greater than the second preset difference threshold.
It should be understood that if the suspected head-shoulder ratio does not fall within the preset head-shoulder ratio interval, it can be determined that no human body is present in the preset area.
First, the width of the suspected head and the width of the suspected shoulders are determined from the number, along the preset direction, of sub-preset-areas whose second absolute depth difference is greater than the second preset difference threshold; the suspected head-shoulder ratio is then determined from the suspected head width and the suspected shoulder width.
Accordingly, in some embodiments, as shown in Fig. 7, determining the suspected head-shoulder ratio from the number, along the preset direction, of sub-preset-areas whose second absolute depth difference is greater than the second preset difference threshold (i.e., step S61) may include the following steps S71 to S72.
In step S71, the suspected head width and the suspected shoulder width are determined from the number, along the preset direction, of sub-preset-areas whose second absolute depth difference is greater than the second preset difference threshold;
In step S72, the suspected head-shoulder ratio is determined from the suspected head width and the suspected shoulder width.
In some embodiments, as shown in Fig. 8, determining from the currently detected depth information the second absolute depth difference corresponding to each sub-preset-area (i.e., step S51) may include the following steps S81 to S82.
In step S81, a third absolute depth difference between each sub-preset-area and each of its adjacent sub-preset-areas is determined from the currently detected depth information;
In step S82, the maximum among the third absolute depth differences corresponding to each sub-preset-area is determined and taken as that sub-preset-area's second absolute depth difference.
Since the preset area is evenly divided into multiple sub-preset-areas, arranged both along the preset direction and perpendicular to it, each sub-preset-area has 3 or 4 adjacent sub-preset-areas. For each sub-preset-area, the absolute Depth difference to each adjacent sub-preset-area is determined; these are called third absolute depth differences, of which there are 3 or 4. The maximum of these 3 or 4 third absolute depth differences is then taken as that sub-preset-area's second absolute depth difference.
Fig. 9 is a Depth schematic of an 8×8 multi-point TOF sensor provided by an embodiment of the present disclosure. As shown, there are 8×8=64 points in total, i.e., 64 sub-preset-areas, each with 3 or 4 adjacent sub-preset-areas. Starting from the first sub-preset-area in the top-left corner, every sub-preset-area is traversed row by row, left to right and top to bottom, and its second absolute depth difference is calculated. For example, the first sub-preset-area D1 has 3 adjacent sub-preset-areas, identified in traversal order as D2, D9 and D10. The absolute Depth differences between D1 and each of D2, D9 and D10 are calculated, giving 3 third absolute depth differences; the maximum of these, Δ1, is taken as the second absolute depth difference of the first sub-preset-area D1. This yields the second absolute depth differences of all 64 sub-preset-areas: {Δ1, Δ2, …, Δ64}. The maximum Depth difference observed when a user leaves the seat, Δn, called the second preset difference threshold, is obtained in advance by training on collected data. When some Δ in {Δ1, Δ2, …, Δ64} exceeds Δn, the object in the preset area is suspected to be a human body. In that case, all Δ values in {Δ1, Δ2, …, Δ64} greater than Δn are traversed, the sub-preset-areas corresponding to these Δ values are marked, a contour figure of the marked sub-preset-areas is generated, and the contour figure is processed row by row to count the number of these sub-preset-areas in the horizontal direction, i.e., their width. For example, assuming D11, D12, D18-D21, D26-D29, D34-D37, D41-D46, D49-D54 and D57-D63 are all sub-preset-areas whose second absolute depth difference exceeds Δn, their horizontal counts are 2, 4, 4, 4, 6, 6 and 7.
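The neighbour-maximum computation of steps S81-S82 can be sketched as follows. Full 8-connectivity is an assumption: it matches the example above, where the corner cell D1 is adjacent to D2, D9 and D10, although interior cells then have more than the 3 or 4 neighbours the text mentions.

```python
def second_depth_diffs(depth, rows=8, cols=8):
    """Per-cell maximum |Depth difference| to adjacent cells (steps S81-S82).

    depth: flat row-major list of rows*cols values. Adjacency is assumed to be
    full 8-connectivity, consistent with D1 being adjacent to D2, D9 and D10.
    """
    result = []
    for r in range(rows):
        for c in range(cols):
            here = depth[r * cols + c]
            # Third absolute depth differences to every in-bounds neighbour.
            diffs = [
                abs(here - depth[rr * cols + cc])
                for rr in range(max(r - 1, 0), min(r + 2, rows))
                for cc in range(max(c - 1, 0), min(c + 2, cols))
                if (rr, cc) != (r, c)
            ]
            # Keep the maximum as the cell's second absolute depth difference.
            result.append(max(diffs))
    return result
```

Marking the cells whose result exceeds the trained Δn then yields the contour figure described above.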
In an exemplary embodiment, the suspected head width and suspected shoulder width can be calculated from these horizontal counts. In an exemplary embodiment, the mean width of the contour figure may be computed first: (2+4+4+4+6+6+7)/7 = 33/7; the suspected head width is then determined from the widths below this mean: (2+4+4+4)/4 = 7/2, and the suspected shoulder width from the widths above it: (6+6+7)/3 = 19/3. Finally, the suspected head-shoulder ratio is calculated from the suspected head width and suspected shoulder width: (7/2)/(19/3) = 21/38. Note that this is only illustrative; other ways of determining the suspected head and shoulder widths are also feasible, for example directly taking the smallest width as the suspected head width and the largest as the suspected shoulder width, i.e., 2 and 7 respectively.
Once the suspected head-shoulder ratio has been calculated, if that ratio R falls within a head-shoulder ratio interval trained in advance from a large amount of collected data, a human body can be determined to be present in the preset area. In an exemplary embodiment, a human body's head and shoulders are of unequal width, with the head narrower and the shoulders wider overall, whereas other objects or pets show no such trend; a pet's head and shoulder widths are nearly equal. Assuming the preset head-shoulder ratio interval is [Rmin, Rmax], when R satisfies Rmin <= R <= Rmax, the object in the preset area does show the narrow-head, wide-shoulder trend and its head-to-shoulder width ratio is close to a human's, indicating that a human body is indeed present in the preset area. Conversely, when R does not satisfy Rmin <= R <= Rmax, the object's head and shoulder widths may be roughly equal, or the narrow-head, wide-shoulder trend may be absent, indicating that an object or a pet may be seated.
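The worked example above can be reproduced with a short calculation. The interval [2/5, 4/5] used here for [Rmin, Rmax] is a made-up stand-in for the trained interval:

```python
from fractions import Fraction

def head_shoulder_ratio(widths):
    """Steps S71-S72 applied to the per-row widths of marked sub-preset-areas."""
    mean_w = Fraction(sum(widths), len(widths))            # 33/7
    head = [w for w in widths if w < mean_w]               # [2, 4, 4, 4]
    shoulder = [w for w in widths if w > mean_w]           # [6, 6, 7]
    head_w = Fraction(sum(head), len(head))                # 7/2
    shoulder_w = Fraction(sum(shoulder), len(shoulder))    # 19/3
    return head_w / shoulder_w                             # 21/38

r = head_shoulder_ratio([2, 4, 4, 4, 6, 6, 7])
print(r)  # 21/38
R_MIN, R_MAX = Fraction(2, 5), Fraction(4, 5)  # hypothetical [Rmin, Rmax]
print(R_MIN <= r <= R_MAX)  # True: the contour is consistent with a human body
```

Exact rational arithmetic keeps the 21/38 result from the text intact; a float implementation would work equally well in practice.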
In some embodiments, determining from the currently detected depth information whether a human body is present in the preset area may further include the following step: if none of the second absolute depth differences is greater than the second preset difference threshold, determining that no human body is present in the preset area.
That is, after the second absolute depth difference corresponding to each sub-preset-area has been determined from the currently detected depth information in step S51, if none of the second absolute depth differences exceeds the second preset difference threshold, then although the Depth detected in the current period has changed, the object in the preset area is not a human body; an object or a pet may be seated.
Fig. 10 is a screen-control workflow provided by an embodiment of the present disclosure, which may include the following steps: start the vehicle; start the multimedia system; start the TOF sensor; the TOF sensor detects Depth periodically; determine whether the Depth detected in the current period has changed, and if so, determine whether a human body is present in the preset area, otherwise continue detecting Depth; if a human body is determined to be present and the screen is currently off, turn the screen on and then continue detecting Depth; if no human body is determined to be present and the screen is currently on, turn the screen off and then continue detecting Depth.
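The workflow of Fig. 10 can be sketched as an event loop. Everything here is a stand-in: `frames` replaces the periodic TOF readout, `human_present` replaces the head-shoulder detection pipeline, and `set_screen` replaces the real display switch; none of these are APIs specified by the disclosure.

```python
def screen_control_loop(frames, human_present, set_screen,
                        dt=100.0, count_threshold=2):
    """Fig. 10 as a loop over successive TOF frames (flat depth lists)."""
    frames = iter(frames)
    prev = next(frames)
    screen_on = False
    for curr in frames:
        # Step S21: has the depth information changed since the last frame?
        moved = sum(1 for p, n in zip(prev, curr) if abs(p - n) > dt)
        if moved > count_threshold:
            # Step S22: run human detection only when something changed.
            if human_present(curr) and not screen_on:
                set_screen(True)
                screen_on = True
            elif not human_present(curr) and screen_on:
                set_screen(False)
                screen_on = False
        prev = curr
    return screen_on
```

Feeding it an empty-seat frame, an occupied frame and an empty frame again toggles the screen on and then off, matching the turn-on/turn-off branches of the flowchart.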
In addition, an embodiment of the present disclosure further provides an in-vehicle device, which may include: one or more processors; and a storage device storing one or more programs; when the one or more programs are executed by the one or more processors, the one or more processors implement the screen control method described above.
An embodiment of the present disclosure further provides a computer storage medium storing a computer program which, when executed, implements the screen control method described above.
An embodiment of the present disclosure further provides a vehicle, the vehicle including an in-vehicle device, the in-vehicle device being the in-vehicle device described above.
In some embodiments, the vehicle further includes a time-of-flight (TOF) sensor for emitting a light signal and receiving the reflection of that light signal from objects in the preset area; the processor determines the depth information between the screen and objects in the preset area from the time interval between emitting the light signal and receiving the reflected signal.
By detecting the depth information between the screen and objects in the preset area and determining from it whether a human body is present, and by switching the screen on only when a human body is determined to be present and off only when a human body is determined to be absent, the vehicle's power consumption is effectively reduced to save energy while a good human-vehicle interaction experience is provided.
Those of ordinary skill in the art will appreciate that all or some of the steps of the methods and the functional modules/units of the devices disclosed above may be implemented as software, firmware, hardware, or suitable combinations thereof. In a hardware implementation, the division between the functional modules/units mentioned above does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be performed by several physical components in cooperation. Some or all physical components may be implemented as software executed by a processor, such as a central processing unit, digital signal processor or microprocessor, or as hardware, or as an integrated circuit, such as an application-specific integrated circuit. Such software may be distributed on computer-readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). As is well known to those of ordinary skill in the art, the term computer storage media includes volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information (such as computer-readable instructions, data structures, program modules or other data). Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and can be accessed by a computer. Furthermore, as is well known to those of ordinary skill in the art, communication media typically embody computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery media.
Example embodiments have been disclosed herein, and although specific terms are employed, they are used and are to be interpreted in a generic and descriptive sense only and not for purposes of limitation. In some instances, as would be apparent to those skilled in the art, features, characteristics and/or elements described in connection with a particular embodiment may be used alone, or in combination with features, characteristics and/or elements described in connection with other embodiments, unless expressly stated otherwise. Accordingly, those skilled in the art will understand that various changes in form and detail may be made without departing from the scope of the present disclosure as set forth in the appended claims.

Claims (13)

  1. A screen control method, comprising:
    detecting depth information between a screen and objects in a preset area;
    determining, from the detected depth information, whether a human body is present in the preset area; and
    in response to determining that a human body is present in the preset area, controlling the on/off state of the screen to be on; or
    in response to determining that no human body is present in the preset area, controlling the on/off state of the screen to be off.
  2. The method of claim 1, wherein the step of detecting depth information between the screen and objects in the preset area is performed multiple times, and determining, from the detected depth information, whether a human body is present in the preset area comprises:
    determining, from previously detected depth information, whether currently detected depth information has changed; and
    in response to determining that the currently detected depth information has changed, determining, from the currently detected depth information, whether a human body is present in the preset area.
  3. The method of claim 2, wherein each set of detected depth information corresponds one-to-one with sub-preset-areas, and detecting depth information between the screen and objects in the preset area comprises:
    evenly dividing the preset area into the sub-preset-areas; and
    detecting depth information between the screen and objects in each of the sub-preset-areas.
  4. The method of claim 3, wherein determining, from the previously detected depth information, whether the currently detected depth information has changed comprises:
    calculating a first absolute depth difference between each piece of currently detected depth information and the corresponding previously detected depth information;
    counting the number of first absolute depth differences greater than a first preset difference threshold; and
    in response to the number being greater than a preset count threshold, determining that the currently detected depth information has changed.
  5. The method of claim 3, wherein determining, from the currently detected depth information, whether a human body is present in the preset area comprises:
    determining, from the currently detected depth information, a second absolute depth difference corresponding to each of the sub-preset-areas; and
    in response to any of the second absolute depth differences being greater than a second preset difference threshold, determining, from the sub-preset-areas whose second absolute depth difference is greater than the second preset difference threshold, whether a human body is present in the preset area.
  6. The method of claim 5, wherein determining, from the sub-preset-areas whose second absolute depth difference is greater than the second preset difference threshold, whether a human body is present in the preset area comprises:
    determining a suspected head-shoulder ratio from the number, along a preset direction, of sub-preset-areas whose second absolute depth difference is greater than the second preset difference threshold; and
    in response to the suspected head-shoulder ratio falling within a preset head-shoulder ratio interval, determining that a human body is present in the preset area.
  7. The method of claim 6, wherein determining the suspected head-shoulder ratio from the number, along the preset direction, of sub-preset-areas whose second absolute depth difference is greater than the second preset difference threshold comprises:
    determining a suspected head width and a suspected shoulder width from the number, along the preset direction, of sub-preset-areas whose second absolute depth difference is greater than the second preset difference threshold; and
    determining the suspected head-shoulder ratio from the suspected head width and the suspected shoulder width.
  8. The method of claim 5, wherein determining, from the currently detected depth information, the second absolute depth difference corresponding to each of the sub-preset-areas comprises:
    determining, from the currently detected depth information, a third absolute depth difference between each sub-preset-area and each of its adjacent sub-preset-areas; and
    determining the maximum among the third absolute depth differences corresponding to each sub-preset-area as that sub-preset-area's second absolute depth difference.
  9. The method of claim 5, wherein determining, from the currently detected depth information, whether a human body is present in the preset area further comprises:
    in response to none of the second absolute depth differences being greater than the second preset difference threshold, determining that no human body is present in the preset area.
  10. An in-vehicle device, comprising:
    one or more processors; and
    a storage device storing one or more programs;
    wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the screen control method of any one of claims 1-9.
  11. A computer storage medium storing a computer program which, when executed, implements the screen control method of any one of claims 1-9.
  12. A vehicle, comprising an in-vehicle device, the in-vehicle device being the in-vehicle device of claim 10.
  13. The vehicle of claim 12, further comprising a time-of-flight (TOF) sensor configured to emit a light signal and receive a reflected signal of the light signal reflected by objects in the preset area;
    wherein the processor is configured to determine the depth information between the screen and objects in the preset area from the time interval between emitting the light signal and receiving the reflected signal.
PCT/CN2023/099280 2022-08-12 2023-06-09 Screen control method, in-vehicle device, computer storage medium and vehicle WO2024032131A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210966460.4 2022-08-12
CN202210966460.4A CN117666982A (zh) 2022-08-12 2022-08-12 Screen control method, in-vehicle device, computer storage medium and vehicle

Publications (1)

Publication Number Publication Date
WO2024032131A1 true WO2024032131A1 (zh) 2024-02-15

Family

ID=89850595

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/099280 WO2024032131A1 (zh) 2022-08-12 2023-06-09 屏幕控制方法、车载设备、计算机存储介质及车辆

Country Status (2)

Country Link
CN (1) CN117666982A (zh)
WO (1) WO2024032131A1 (zh)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113119877A (zh) * 2021-04-16 2021-07-16 恒大恒驰新能源汽车研究院(上海)有限公司 车载屏幕控制系统及车载屏幕控制方法
CN113268273A (zh) * 2021-04-21 2021-08-17 智马达汽车有限公司 一种车辆多媒体显示方法、系统及车载终端
CN113665511A (zh) * 2021-08-13 2021-11-19 恒大恒驰新能源汽车研究院(上海)有限公司 一种车辆控制方法、装置及计算机可读存储介质
CN215284684U (zh) * 2021-04-12 2021-12-24 广州汽车集团股份有限公司 座舱娱乐系统及车辆
CN114882579A (zh) * 2021-01-21 2022-08-09 奥迪股份公司 车载屏幕的控制方法、装置及车辆

Also Published As

Publication number Publication date
CN117666982A (zh) 2024-03-08


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23851359

Country of ref document: EP

Kind code of ref document: A1