CN116848562A - Electronic device, method and computer program - Google Patents

Electronic device, method and computer program

Info

Publication number
CN116848562A
Authority
CN
China
Prior art keywords
hand
score
arm
driver
owner
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202280010472.8A
Other languages
Chinese (zh)
Inventor
瓦伦·阿罗拉
达米安·埃纳尔
胡安·卡洛斯·托西诺·迪亚兹
大卫·达尔·佐特
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Semiconductor Solutions Corp
Original Assignee
Sony Semiconductor Solutions Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Semiconductor Solutions Corp filed Critical Sony Semiconductor Solutions Corp
Publication of CN116848562A publication Critical patent/CN116848562A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/59Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/597Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • B60K35/10
    • B60K35/28
    • B60K35/65
    • B60K35/652
    • B60K35/654
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/59Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/593Recognising seat occupancy
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/107Static hand or arm
    • G06V40/11Hand-related biometrics; Hand pose recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/107Static hand or arm
    • G06V40/113Recognition of static hand signs
    • B60K2360/1464
    • B60K2360/176
    • B60K2360/741

Abstract

An electronic device has circuitry configured to perform a hand-owner identification based on image analysis of images captured by an imaging system (200) to obtain a hand-owner status.

Description

Electronic device, method and computer program
Technical Field
The present disclosure relates generally to the field of automotive user interfaces, and in particular, to devices, methods, and computer programs for automotive user interfaces.
Background
Automotive user interfaces for vehicle systems relate to the control of vehicle electronics: driving functions, comfort functions (e.g., navigation, communication, entertainment) and driver assistance (e.g., distance checking).
Recent automobiles incorporate interactive screens (touch screens) that are gradually replacing the traditional cockpit. Typically, buttons or interactions are operated directly by a user of the automotive system, and the automotive system outputs feedback as a predefined behavior.
The next generation of vehicle user interfaces also rely on gesture recognition technology. Gesture recognition determines whether a recognizable hand or finger gesture is performed without contacting the touch screen.
While automotive user interfaces that rely on touch screen technology and gesture recognition technology are known, it is generally desirable to provide better techniques for controlling vehicle functions.
Disclosure of Invention
According to a first aspect, the present disclosure provides an electronic device comprising circuitry configured to perform a hand-owner identification based on an image analysis of an image captured by an imaging system to obtain a hand-owner status.
According to a second aspect, the present disclosure provides a method comprising performing a hand-owner identification based on an image analysis of an image captured by an imaging system to obtain a hand-owner status.
According to a third aspect, the present disclosure provides a computer program comprising instructions which, when executed by a computer, cause the computer to perform a hand-owner identification based on an image analysis of an image captured by an imaging system to obtain a hand-owner status.
Further aspects are set out in the dependent claims, the following description and the accompanying drawings.
Drawings
Embodiments are described by way of example with reference to the accompanying drawings, in which:
FIG. 1 schematically illustrates an embodiment of an interactive car feedback system for recognizing a gesture of a user and performing a corresponding action based on the recognized gesture;
FIG. 2 schematically illustrates an embodiment of an in-vehicle imaging system including a ToF imaging system for identifying a hand owner in an in-vehicle scene;
FIG. 3 schematically illustrates an embodiment of a process for adjusting the output behavior of an automotive system based on an operation performed by a user and on which user performed it;
FIG. 4 schematically illustrates an embodiment of a process for vehicle operation mode selection based on the hand-owner status;
FIG. 5 schematically illustrates an embodiment of an iToF imaging system in an in-vehicle scene, wherein images captured by the iToF imaging system are used for hand-owner identification;
fig. 6a shows in more detail an example of a depth image obtained by an in-vehicle ToF imaging system for vehicle seat occupancy detection, wherein the depth image shows that the seat of the passenger is occupied;
fig. 6b shows an example of a depth image obtained by an in-vehicle ToF imaging system for vehicle seat occupancy detection in more detail, wherein the depth image shows that the driver's seat is occupied;
FIG. 7 illustrates a depth image generated by a ToF imaging system capturing a scene in a vehicle-mounted scene, wherein an active hand is detected in the depth image;
FIG. 8 schematically depicts an embodiment of a hand-owner identification process;
fig. 9a schematically illustrates an embodiment of an image analysis process performed on an image captured by an in-vehicle ToF imaging system;
FIG. 9b shows the lower arm analysis results, wherein the position of the bottom of the arm is determined;
fig. 10 schematically shows an embodiment of an arm angle determination process performed to obtain an arm angle;
FIG. 11a schematically illustrates an embodiment of a hand-owner status determination in more detail, wherein the hand-owner status indicates that the detected hand belongs to the driver;
FIG. 11b schematically illustrates an embodiment of a hand-owner status determination in more detail, wherein the hand-owner status indicates that the detected hand belongs to a front seat passenger;
FIG. 12a schematically illustrates an embodiment of a fingertip analysis process based on a tip criterion to obtain a tip score;
fig. 12b schematically shows an embodiment of a finger gesture detection result obtained based on the detected tip position and palm position;
FIG. 13 schematically illustrates an embodiment of an arm analysis process to obtain hand parameters;
FIG. 14a schematically illustrates an embodiment of an arm voting process performed based on hand parameters to obtain an arm voting result;
fig. 14b schematically shows an embodiment of the arm voting performed in fig. 14 a;
fig. 14c schematically illustrates another embodiment of the arm voting performed in fig. 14 a;
FIG. 15a shows an embodiment of the arm voting process described with respect to FIG. 14a in more detail, wherein the arm voting results are attributed to the driver;
fig. 15b shows an embodiment of the arm voting process described in relation to fig. 14a in more detail, wherein the arm voting results are attributed to the passengers;
fig. 16 schematically shows an embodiment of the score determination process in which the score of the driver and the score of the passenger are calculated;
FIG. 17 illustrates a flow chart visualizing a method of determining the hand-owner status of an identified active hand, wherein the calculated score of the driver and the score of the passenger are compared;
FIG. 18 illustrates a flow chart visualizing a method of generating the hand-owner status for an identified active hand in a captured image;
FIG. 19 shows a flow chart visualizing a method of hand-owner status recognition, wherein hand-owner history statistics are calculated and arm voting and a right-hand-drive (RHD) switch are applied;
FIG. 20 illustrates a flow chart of an embodiment of a method visualizing hand-owner status identification;
FIG. 21 shows a block diagram depicting an example of a schematic configuration of a vehicle control system;
FIG. 22 schematically illustrates an embodiment of a hand-owner detection process performed to adjust car system behavior based on the interacting user;
FIG. 23 illustrates an embodiment of a separation line defined in a captured image in more detail; and
fig. 24 schematically shows a hand-owner detection result in which the hand-owner status is set as the driver while the hand-owner interacts with the in-vehicle infotainment system.
Detailed Description
Before a detailed description of the embodiments is given with reference to fig. 1 to 24, some general description is made.
Automotive systems are becoming more and more intelligent. In the embodiments described below, information from a user's hand interacting with an entertainment system or an automobile driving system may be used to tailor cockpit content to a given user.
An embodiment discloses an electronic device comprising circuitry configured to perform a hand-owner identification based on an image analysis of an image captured by an imaging system to obtain a hand-owner status.
In an in-vehicle scenario or the like, the hand-owner identification may be performed in the cabin of the vehicle.
The circuitry of the electronic device may comprise a processor, which may be, for example, a CPU, memory (RAM, ROM, etc.) and/or storage, an interface, etc. The circuitry may include or may be connected to input means (mouse, keyboard, camera, etc.), output means (display (e.g. liquid crystal, (organic) light emitting diode, etc.)), (wireless) interfaces, etc., as is generally known for electronic devices (computers, smartphones, etc.). Further, the circuitry may include or be connected to a sensor (image sensor, camera sensor, video sensor, or the like) for sensing still image or video image data. In particular, the circuitry of the electronic device may include a ToF imaging system (iToF camera).
In an in-vehicle scene, the ToF imaging system may illuminate its field of view and objects therein, such as a driver's hand, a passenger's leg, a driver's leg, a console, an infotainment system, etc. During the hand-owner identification process, a ToF imaging system including a ToF sensor may detect interactions of the driver and/or passenger with the infotainment system of the car or the like. Further, in such a hand-owner identification process, the driver and the front seat passenger can be distinguished from each other.
The user's hand is typically detected as an active hand interacting with an entertainment system or an automotive driving system in the cabin of a vehicle, for example.
In an in-vehicle scenario, the circuitry may detect occupant input actions and obtain occupant information, which may include hand-owner information, based on which the hand-owner status may be generated. The hand-owner status may be any status information indicating, for example, that the detected hand belongs to the driver, that it belongs to the (front seat) passenger, or that it is not known to whom the hand belongs, etc. The car system may use the hand-owner status to adjust the output of the car cabin or to allow or disallow certain functions. The car system may use the hand-owner status, for example, to allow passengers to interact with functions of the car system, including the infotainment system, that the driver is not allowed to use, and to allow the driver to adjust configurations of the car, etc., to which the passengers may not have access.
The image captured by the imaging system may be a depth image, a confidence image, or the like. The imaging system may be any imaging system comprising at least one camera, wherein the camera may be a depth camera system, a red-green-blue (RGB) camera, a time-of-flight (ToF) camera, combinations thereof, or the like. The in-cabin monitoring depth camera system may be fixed to the ceiling of the car and it may be oriented in the cabin with the field of view facing downward. Preferably, the field of view is configured to be wide enough to include the driver, the center console area, and the passengers.
The hand-owner identification process may, for example, combine several criteria calculated by software, such as camera orientation criteria, palm position criteria, palm trajectory criteria, arm angle and arm position criteria, fingertip analysis criteria, etc. Hand movements and the hand-owner history may also be monitored. In the hand-owner identification, score calculation and voting result calculation may be performed, which may be used to identify the hand owner and to generate the hand-owner status while reducing false detections. The hand-owner identification may be performed in daylight, dim light, and night conditions.
The circuitry may be configured to define a driver steering wheel region as a region of interest (ROI) in the captured image, and perform the hand-owner identification based on the defined driver steering wheel region. The driver steering wheel region may be an area comprising at least a portion of the steering wheel of the vehicle. The driver steering wheel region in real space maps to a corresponding region of interest (ROI) in the captured image.
The circuitry may be configured to detect an active hand in an image captured of the field of view of the imaging system, e.g. a ToF imaging system, and perform the hand-owner identification based on the detected active hand. The active hand may be the driver's hand or the passenger's hand. The active hand may be a hand that interacts with the automotive system, including the infotainment system. The active hand may be segmented and tracked using a dedicated pipeline, e.g. by defining a bounding box in the captured image, defining an ROI in the captured image, detecting a two-dimensional (2D)/three-dimensional (3D) position of the active hand in the captured image, and the like.
The circuitry may be configured to define a minimum number of frames in which an active hand should be detected in the driver steering wheel zone. The minimum number of frames may be a predefined number. The minimum number of frames may be any integer suitably selected by a person skilled in the art. The active hand may be at least partially detected in the driver steering wheel zone.
The circuitry may be configured to count the number of frames in which an active hand is detected in the driver steering wheel zone, and perform the hand-owner identification by comparing the minimum number of frames with the counted number of frames. The counted number of frames in which an active hand is detected in the driver steering wheel zone may be any integer. The frames in which the active hand is detected in the driver steering wheel zone may or may not be consecutive frames. The active hand may be at least partially detected in the driver steering wheel zone.
The circuitry may be configured to obtain a hand owner status indicating that the hand owner is the driver when the minimum number of frames is less than the counted number of frames.
The circuitry may be configured to perform image analysis based on the captured image to obtain a tip position, a palm position, and an arm position indicative of a lower arm position. The image analysis performed on the captured image may include pixel segmentation (2D or 3D) to extract, for example, fingertip position, finger direction (which may be obtained by applying principal component analysis to a 3D point cloud), palm position (which may be obtained by calculating a center of gravity estimate of a 2D palm), palm orientation (which may be obtained by applying principal component analysis to a segmented palm), arm orientation (which may be calculated from finger direction), lower region, lower arm, finger pose, and the like.
The circuitry may be configured to perform an arm angle determination based on the palm position and the lower arm position to obtain an arm angle. The arm angle may include information about the arm orientation, etc.
The circuitry may be configured to perform a fingertip analysis based on the tip position to obtain a tip score. Fingertip analysis may include detecting one finger (1F) gesture or two finger (2F) gestures by locating the detected position of the fingertip relative to the palm center. This may give information about the owner of the hand. Based on the tip and palm positions, a tip-palm direction may be determined. A specific range of tip-palm directions may be predefined for each of the passenger's hand and the driver's hand. The tip score may be a score indicating whether the detected tip is a fingertip of the driver or a fingertip of the passenger.
The circuitry may be configured to perform an arm analysis based on the palm position, the lower arm position, and the arm angle to obtain a palm score, a lower arm score, and an arm angle score. The palm position may be determined using a confidence image or a 2D image. The arm analysis scores may be used to distinguish the driver's arm from the passenger's arm. The lower arm position may be used to detect where the arm enters the field of view. The arm angle may be an angle defined between a separation line, which separates the captured image into two parts, and the detected hand.
The circuitry may be configured to perform an arm vote based on the palm position, the lower arm position, and the arm angle to obtain an arm vote result. Arm votes may be represented by boolean values. The arm voting results may affect the hander score that defines the hander state. In particular, false positive hand states can be avoided by arm voting.
The circuitry may be configured to perform a score determination based on the arm vote result, the tip score, the palm score, the lower arm score, and the arm angle score to obtain a driver score and a passenger score.
The circuitry may be configured to obtain a hand-owner status indicating that the hand owner is the driver when the score of the driver is higher than the score of the passenger.
The circuitry may be configured to obtain a hand-owner status indicating that the hand owner is the passenger when the score of the driver is lower than the score of the passenger.
The circuitry may be configured to obtain a hand-owner status indicating that the hand owner is unknown when an absolute difference between the score of the driver and the score of the passenger is smaller than a threshold. The threshold value may be any value suitably selected by a person skilled in the art.
According to one embodiment, the circuit may be configured to perform seat occupancy detection based on the depth image to obtain a seat occupancy detection state when the captured image is the depth image. The seat occupancy detection may be performed with any seat occupancy method known to a person skilled in the art.
According to one embodiment, the circuitry may be configured to perform the hand-owner identification based on a Left Hand Driving (LHD) configuration or a Right Hand Driving (RHD) configuration.
Embodiments also disclose a method comprising performing a hand-owner identification based on an image analysis of an image captured by an imaging system to obtain a hand-owner status.
The embodiments also disclose a computer program comprising instructions that, when executed by a computer, cause the computer to perform a hand-owner identification based on an image analysis of an image captured by an imaging system to obtain a hand-owner status.
Embodiments are now described with reference to the drawings.
Interactive automobile feedback system
Fig. 1 schematically illustrates an embodiment of an interactive car feedback system for recognizing a gesture of a user and performing a corresponding action based on the recognized gesture.
In an in-vehicle scenario, gesture recognition 100 recognizes gestures performed by a driver or passenger of a vehicle. This process may be performed by an interactive car feedback system (e.g., car system 101). The detected gesture may typically include pressing a button as part of an interactive screen or performing a direct interaction from the driver or passenger to the car system 101. Based on the recognized gesture, the car system 101 performs the process of outputting an action 102. For example, the car system 101 performs predefined output actions, such as predefined behaviors. In the event that the recognized gesture is pressing a button, the signal from the pressed button may be used to determine an output action, and the recognized gesture may be used to perform a hand-owner status determination.
For example, the car system 101 detects operations performed on an infotainment system of the car, such as a multimedia player operation, a navigation system operation, a car configuration adjustment operation, a warning flasher activation operation, etc., and/or operations performed on a console of the car, such as a manual brake operation, etc.
Vehicle-mounted ToF imaging system
Fig. 2 schematically illustrates an embodiment of an in-vehicle imaging system including a ToF imaging system for identifying a hand owner in an in-vehicle scene.
The ToF imaging system 200 actively illuminates its field of view 201 with pulses of light in an in-vehicle scene. The ToF imaging system 200 analyzes the time of flight of the emitted light to obtain images of the field of view 201, such as depth images and confidence images. Based on the obtained image, the processor 202 performs a hand-owner identification to obtain a hand-owner status. Based on the owner status of the hand determined by the processor 202, the infotainment system 203 of the vehicle performs predefined actions. The processor 202 may be implemented as the microcomputer 7610 of fig. 21 below.
In the embodiment of fig. 2, ToF imaging system 200 can be an indirect ToF imaging system (iToF) that emits light pulses of infrared light within its field of view 201. Objects included in field of view 201 of ToF imaging system 200 reflect the emitted light back to ToF imaging system 200. The ToF imaging system 200 can capture a confidence image and a depth map (e.g., depth image) of the field of view 201 of the vehicle interior by analyzing the time of flight of the emitted infrared light. The objects included in the field of view 201 of the iToF sensor of the ToF imaging system 200 can be the dashboard of the vehicle, the console of the vehicle, the hands of the driver, the hands of the passengers, etc. Alternatively, the ToF imaging system 200 can be a direct ToF imaging system (dToF imaging system), an imaging system including an RGB camera and a ToF sensor, any 2D/RGB vision system known to those skilled in the art, or the like.
Fig. 3 schematically shows an embodiment of a process for adjusting the output behavior of an automotive system based on an operation performed by a user, which is a driver or a passenger of the vehicle, and based on which user performed it (e.g., the hand-owner status).
At 204, an operation performed by a user is detected. At 205, if the operation is "manual braking", the process proceeds at 206. If the operation is not "manual braking", the process proceeds at 209. If the hand-owner status is set to "driver" at 206, the car system allows the operation to be performed at 208. If the hand-owner status is not set to "driver" at 206, the car system does not allow the operation to be performed at 207. At 209, if the operation is "multimedia player", the process proceeds at 210. If the operation is not "multimedia player", the process proceeds at 213. If the hand-owner status is set to "driver" and the car is stopped at 210, or if the hand-owner status is set to "passenger", then the car system allows the operation to be performed at 212. Otherwise, the car system does not allow the operation to be performed at 211. At 213, if the operation is "navigation system", the process proceeds at 214. If the operation is not "navigation system", the process proceeds at 217. If the hand-owner status is set to "driver" and the car is stopped at 214, or if the hand-owner status is set to "passenger", then the car system allows the operation to be performed at 216. Otherwise, the car system does not allow the operation to be performed at 215. At 217, if the operation is "car configuration adjustment", the process proceeds at 218. If the operation is not "car configuration adjustment", the process proceeds at 221. If the hand-owner status is set to "passenger" at 218, the car system does not allow the operation to be performed at 220. If the hand-owner status is not set to "passenger" at 218, the car system allows the operation to be performed at 219. At 221, if the operation is "warning flasher", the process proceeds at 222. If the hand-owner status is set to "driver" at 222, the car system allows the operation to be performed at 224. If the hand-owner status is not set to "driver" at 222, the car system does not allow the operation to be performed at 223.
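The gating logic of fig. 3 can be summarized as a small dispatch over the operation and the hand-owner status. The following Python sketch is illustrative only; the function name, the operation strings and the car_stopped flag are assumptions, not part of the disclosure.

```python
def operation_allowed(operation: str, hand_owner: str, car_stopped: bool) -> bool:
    """Illustrative permission logic following Fig. 3 (names are assumptions)."""
    if operation == "manual_brake":
        return hand_owner == "driver"
    if operation in ("multimedia_player", "navigation_system"):
        # Driver only while the car is stopped; passenger always.
        return (hand_owner == "driver" and car_stopped) or hand_owner == "passenger"
    if operation == "car_configuration_adjustment":
        return hand_owner != "passenger"
    if operation == "warning_flasher":
        return hand_owner == "driver"
    return True  # operations not covered by Fig. 3 are not restricted here


# Example: a passenger may operate the multimedia player while the car is moving.
assert operation_allowed("multimedia_player", "passenger", car_stopped=False)
```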
In the embodiment of fig. 2, the in-vehicle imaging system obtains information about an active hand (e.g., a driver's hand or a passenger's hand) interacting with the infotainment system 203. The infotainment system 203 may allow passengers to perform interactions that the driver cannot perform. Furthermore, the infotainment system 203 may, for example, allow the driver to perform interactions that passengers should not access, such as adjusting the configuration of the car.
Fig. 4 schematically shows an embodiment of a process for selecting an operating mode of the vehicle based on the hand-owner status. Based on this information about the active hand, the automotive system may provide three modes of operation, namely a driver-centric interaction mode, a passenger-centric interaction mode and a traditional interaction mode independent of the hand owner.
At 225, a hand-owner status is detected. If the hand-owner status is set to "driver" at 226, then the mode of operation is set to the driver-centric interaction mode at 227. If the hand-owner status is not set to "driver", the process proceeds at 228. If the hand-owner status is set to "passenger" at 228, the operational mode is set to the passenger-centric interaction mode at 229. If the hand-owner status is not set to "passenger", the process proceeds at 230. At 230, the operational mode is set to the conventional interaction mode.
The hand-owner identification and hand-owner status determination performed by the processor 202 are based on calculations of hand parameters and on history, as described below with respect to fig. 8 to 20. The hand-owner identification may combine single-frame analysis and frame history for hand analysis, arm analysis, and rule-based analysis.
Fig. 5 schematically illustrates an embodiment of a ToF imaging system in an in-vehicle scene, wherein images captured by the ToF imaging system are used for hand-owner identification.
The ToF imaging system 200 (see fig. 2), for example mounted on the vehicle ceiling, includes an iToF sensor that captures an in-vehicle scene by actively illuminating its field of view 201 inside the vehicle with pulses of light. The ToF imaging system 200 captures a confidence image and a depth map (e.g., depth image) of the interior compartment of the vehicle by analyzing the time of flight of the emitted infrared light. For example, the ToF imaging system 200 captures a human-machine interface (HMI) 301 of the vehicle within its field of view 201, which is associated with the infotainment system (203 in fig. 2) of the vehicle described above. Further, the ToF imaging system 200 captures the front seat passenger's hand, the front seat passenger's leg, the driver's hand (e.g., active hand 303), the driver's leg, the steering wheel (e.g., steering wheel 302 of the vehicle), and so forth within its field of view 201.
Based on the captured image of the ToF imaging system 200, the owner of the detected active hand (e.g., active hand 303) is determined. In order for the ToF imaging system 200 to detect the owner of the active hand 303, the iToF sensor depth image and/or iToF sensor confidence image is analyzed, for example, by defining a region of interest (ROI) (e.g., the driver steering wheel region 300) in the field of view 201 of the iToF sensor. The driver steering wheel region 300 corresponds to the same region in the captured image, i.e., the ROI.
The iToF sensor of the ToF imaging system (see 200 in fig. 2 and 5) obtains a depth image (see fig. 6a and 6b) by capturing its field of view (see 201 in fig. 2 and 5). A depth image is an image or image channel that contains information about the true distance of the object surfaces in the scene from the viewpoint (i.e., from the iToF sensor). The depth (true distance) can be measured by the phase delay of the return signal. Thus, the depth image may be determined directly from the phase image, which is the set of all phase delays determined in the pixels of the iToF sensor.
Occupancy detection
Fig. 6a shows in more detail an embodiment of a depth image obtained by an in-vehicle ToF imaging system for vehicle seat occupancy detection, wherein the depth image shows that the seat of the passenger is occupied. Here, a console of the vehicle and a leg 400 of a passenger located on the right side of the console are depicted by capturing a depth image of a cabin of the vehicle. In the embodiment of fig. 6a, only one person is detected, and thus the seat occupant is a passenger.
Fig. 6b shows in more detail an embodiment of a depth image obtained by an in-vehicle ToF imaging system for vehicle seat occupancy detection, wherein the depth image shows that the driver's seat is occupied. Here, the depth image depicts the console of the vehicle and the driver's leg 401 located to the left of the console. In the embodiment of fig. 6b, only one person is detected, so the seat occupant is the driver.
The depth images in fig. 6a and 6b obtained by the ToF imaging system 200 are analyzed to detect the driver's car seat occupancy and/or the passenger's car seat occupancy. For example, the analysis may be performed by removing the background in the depth image using a previously made reference image in which only the static portion of the field of view remains. The blobs for each of the driver and passenger areas are then calculated. A blob corresponds to the surface of an object within the depth range and is static. In case the blob size satisfies a threshold value, the presence of the driver and/or the passenger is determined. This analysis detects whether there are any occupants in the car seats.
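A minimal sketch of this occupancy test, assuming a NumPy depth image, a pre-recorded reference image of the empty cabin, and rectangular seat regions given as row/column slices; the threshold values are placeholders, not values from the disclosure.

```python
import numpy as np

def detect_seat_occupancy(depth, reference, driver_roi, passenger_roi,
                          diff_thresh=0.05, min_blob_px=500):
    """Sketch of the occupancy test of Figs. 6a/6b: pixels that differ from the
    reference (empty cabin) image form a blob per seat area; a seat is considered
    occupied if its blob is large enough. ROIs are (row_slice, col_slice) tuples."""
    foreground = np.abs(depth - reference) > diff_thresh   # pixels differing from the empty cabin
    occupancy = {}
    for name, (rows, cols) in {"driver": driver_roi, "passenger": passenger_roi}.items():
        blob_size = int(foreground[rows, cols].sum())      # blob size in the seat area
        occupancy[name] = blob_size >= min_blob_px
    return occupancy
```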
In case only one person is detected in the car, the final decision is straightforward, i.e. the occupied car seat is that of the driver or that of the passenger. In case only one person, i.e. the driver or the passenger, is in the vehicle, it is not necessary to perform any further driver/passenger detection. In addition, false positives and false negatives in the detection of the hand-owner status can be prevented, and filtering for the final decision about the hand-owner status can be prepared.
In the embodiment of fig. 6a and 6b, the car seat occupancy detection is performed based on the depth image. Alternatively, the vehicle seat occupancy detection may be performed using a seat pressure sensor or the like embedded in each seat of the vehicle. Still alternatively, the car seat occupancy detection is optional, and the person skilled in the art may not perform the occupancy detection of the seats of the vehicle.
Hand detection
Fig. 7 shows a depth image generated by a ToF imaging system capturing a scene in an in-vehicle scenario, wherein an active hand is detected in the depth image. The captured scene includes the right hand 501 of the driver of the vehicle and the right leg 502 of the driver. An object/hand recognition method is performed on the depth image to track an active hand, such as hand 501. In the case of a hand being detected, an active bounding box 500 relating to the detected hand 501 in the depth image is determined and provided by the object/hand detection process.
Fig. 7 shows only a sub-portion of the depth image captured by the in-vehicle ToF imaging system. Object detection, such as hand detection, is performed as part of the hand-owner identification process.
The object detection may be performed based on any object detection method known to those skilled in the art. An exemplary object detection method is described in the paper "Sliding Shapes for 3D Object Detection in Depth Images" by Shuran Song and Jianxiong Xiao, published in the proceedings of the 13th European Conference on Computer Vision (ECCV 2014).
Hand-owner identification
Fig. 8 schematically depicts an embodiment of a hand-owner identification process. The ToF imaging system (see 200 in fig. 2 and 5) illuminates the in-vehicle scene within its field of view (see 201 in fig. 2 and 5) and captures an image, such as a depth image. A region of interest (ROI) is defined in the depth image, such as a driver steering wheel region in the field of view of the ToF imaging system (see 300 in fig. 5).
At 600, a predefined driver steering wheel zone is obtained. The predefined driver steering wheel zone corresponds to the same area in the captured image, i.e. a predefined ROI. The predefined ROI may be set in advance (e.g., at manufacturing, system setup, etc.) as a predefined parameter of the process. At 601, a predefined minimum number of frames m, in which an active hand should be identified in the driver steering wheel zone, is obtained. The minimum number of frames m is set such that, if the identified active hand is at least partially within the driver steering wheel zone for at least m frames, the identified active hand is considered to be the driver's hand. At 602, the number of frames n in which an active hand is identified at least partially in the driver steering wheel zone is counted. The n frames may be consecutive frames, although the present embodiment is not limited in that regard. At 603, if the number of frames n obtained at 602, in which an active hand is identified at least partially in the driver steering wheel zone, is higher than the predefined minimum number of frames m obtained at 601, the method proceeds at 604. At 604, a hand-owner status indicating that the owner of the active hand is the driver is determined.
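A minimal sketch of the frame-counting rule of fig. 8, assuming the active hand is delivered per frame as an axis-aligned bounding box; the ROI format, the default value of m and the class name are assumptions.

```python
class SteeringWheelRule:
    """Rule of Fig. 8: if the active hand is seen at least partially inside the
    driver steering wheel ROI in more than m frames, the hand owner is the driver."""

    def __init__(self, roi, min_frames=5):
        self.roi = roi                 # (x_min, y_min, x_max, y_max) in image coordinates
        self.min_frames = min_frames   # predefined minimum number of frames m
        self.count = 0                 # counted number of frames n

    def update(self, hand_bbox):
        """hand_bbox: (x_min, y_min, x_max, y_max) of the tracked active hand."""
        x0, y0, x1, y1 = self.roi
        hx0, hy0, hx1, hy1 = hand_bbox
        if hx0 < x1 and hx1 > x0 and hy0 < y1 and hy1 > y0:   # partial overlap with the ROI
            self.count += 1
        return "driver" if self.count > self.min_frames else None
```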
Image analysis
Fig. 9a schematically illustrates an embodiment of an image analysis process performed on an image captured by an in-vehicle ToF imaging system.
Images captured by an in-vehicle ToF imaging system, such as captured image 700, are subjected to image analysis 701 to obtain the detected tip position 702, palm position 703, and arm position 704 of the active hand. Arm position 704 includes information about the lower arm position (see fig. 9b). Image analysis 701 may include a process of image segmentation to detect hands and arms in the captured image.
The palm position 703 may be estimated, for example, by image analysis 701 by calculating the center of gravity of the two-dimensional (2D) palm detected in a depth image generated by the ToF imaging system, although the present embodiment is not limited in that regard. Alternatively or additionally, the palm position may be determined using a confidence image generated by the ToF imaging system (see 200 in fig. 2 and 5). The palm orientation may also be obtained by applying principal component analysis to the palm detected by image segmentation and analysis.
For example, arm position 704 may be detected in combination with where the identified active hand enters the field of view (see 201 in fig. 2 and 5). The position where the identified active hand enters the field of view is denoted herein as the lower arm position (see fig. 9b).
Seat occupant detection may be performed as described with respect to fig. 6a and 6 b. The process of image segmentation may be performed as described with respect to fig. 7.
Any information required for performing the hand-owner identification may be extracted by those skilled in the art through an image analysis process. For example, the image analysis 701 for obtaining a fingertip position may be any image analysis method known to a person skilled in the art. An exemplary image analysis method is described in patent document WO 2019/134888 A1 (SONY CORP.), 11 July 2019 (11.07.2019), wherein an exemplary gesture recognition algorithm is used to extract feature points, such as fingertips, detected in a captured image.
Another exemplary image analysis method for obtaining hand parameters such as fingertip position, palm position, arm position, hand and finger posture, etc. is described in patent document WO 2015/104257 A1 (SONY CORP.), 16 July 2015 (16.07.2015), wherein a detected point of interest (POI) in a user's hand is determined by selecting at least one of a palm center, a hand tip, a fingertip, etc.
In the embodiment of fig. 9a, the segmentation process performed in the captured image may be a pixel segmentation performed on a two-dimensional (2D) image or a three-dimensional (3D) image to extract information for generating a state of the hand owner, such as a tip position 702, a palm position 703 and an arm position 704. Other information such as fingertip position and orientation, palm position and orientation, arm position and orientation, hand and finger pose, hand bounding box, etc. may also be obtained by image analysis, in which case the present embodiment is not limited.
Fig. 9b shows the lower arm analysis result, wherein the position of the bottom of the arm is determined. The active hand 706 is identified and the position where the arm coupled to the identified hand 706 enters the field of view of the ToF sensor is detected. In the embodiment of fig. 9b, the lower arm position is determined by calculating the centroid (i.e., the average center) of the arm contour in the lower arm region 705 of the field of view. The lower arm region 705 is at the edge of the captured image closest to the rear of the vehicle. The average center may be calculated from the contour of the hand, and the contour of the hand may be estimated from the hand segmentation. The contour of the arm, i.e., the contour of the hand in the lower arm region, may be calculated over a height of 14 pixels and the same width as that of the captured image, although the present embodiment is not limited in that regard. Alternatively, the contour of the arm may be considered within a height of 14 pixels, regardless of width, without limiting the present embodiment in that regard. Still alternatively, the height and width may be any suitable height and width selected by one skilled in the art.
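A minimal sketch of this lower-arm analysis, assuming the segmented hand/arm is given as a binary NumPy mask and that the image edge closest to the rear of the vehicle corresponds to the bottom rows of the mask; the band height of 14 pixels follows the text above.

```python
import numpy as np

def lower_arm_position(hand_mask, band_height=14):
    """Centroid (average centre) of the segmented hand/arm pixels inside a band of
    `band_height` rows at the image edge where the arm enters the image."""
    band = hand_mask[-band_height:, :]      # lower arm region (assumed: bottom rows)
    ys, xs = np.nonzero(band)
    if xs.size == 0:
        return None                         # the arm does not reach the band
    y_offset = hand_mask.shape[0] - band_height
    return float(xs.mean()), float(y_offset + ys.mean())
```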
Fig. 10 schematically shows an embodiment of an arm angle determination process performed to obtain an arm angle.
Based on the palm position 703 and the arm position 704 (see 701 in fig. 9 a) acquired by image analysis, an arm angle determination 800 is performed to obtain a detected arm angle, such as arm angle 801.
The arm angle determination 800 includes detecting the arm angle with respect to a vertical line (i.e., a separation line) which divides the captured image into two parts and is taken as an angle of 0°. The arm angle (see 901 in fig. 11a and 11b) is determined from the separation line (see 900 in fig. 11a and 11b) by taking into account the slope of the arm with respect to the separation line. The arm angle may be determined in a captured 2D image, such as a confidence image and/or an RGB image. In this case, the arm vector in the 2D image directly defines the arm angle. The arm angle may also be determined in a captured 3D depth image. In this case, the arm direction, i.e. the arm orientation (see 902 in fig. 11a and 11b), may be determined as a 3D vector from the depth image generated by the ToF imaging system. The orientation of the arm is then projected in 2D onto a confidence image generated by the ToF imaging system, and the arm angle is determined in the 2D image.
In the present embodiment, the arm angle 801 is obtained based on the palm position 703 and the arm position 704, and the present embodiment is not limited in that regard. Alternatively, the arm angle may be calculated from the direction of the finger and relative to the separation line between the driver/passenger areas. In this case, the direction of the finger may be calculated by applying principal component analysis to a three-dimensional (3D) point cloud or the like. Still alternatively, the arm angle may be calculated from the fingertip-palm direction and the arm position, wherein the fingertip-palm direction may be calculated based on the fingertip position, the palm position, and the like.
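A minimal sketch of the arm angle determination of fig. 10 in the 2D case, assuming the palm position and the lower arm position are given as (x, y) pixel coordinates and the separation line is vertical; the sign convention is an assumption.

```python
import math

def arm_angle_deg(palm_xy, lower_arm_xy):
    """Angle between the 2D arm vector (lower arm position -> palm position) and a
    vertical separation line taken as 0 degrees. Positive values point towards the
    right part of the image, negative values towards the left part (assumed)."""
    dx = palm_xy[0] - lower_arm_xy[0]
    dy = palm_xy[1] - lower_arm_xy[1]
    return math.degrees(math.atan2(dx, dy))
```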
Fig. 11a schematically shows an embodiment of a hand-owner status determination in more detail, wherein the hand-owner status indicates that the detected hand belongs to the driver, and fig. 11b schematically shows an embodiment of a hand-owner status determination in more detail, wherein the hand-owner status indicates that the detected hand belongs to the front seat passenger. Both determinations are based on hand parameters, such as the fingertip position 702, the palm position 703, the arm position 704 and the arm angle 801, acquired by the image analysis 701 and the arm angle determination 800 as described with respect to fig. 9a and fig. 10, respectively.
An in-vehicle ToF imaging system (see 200 in fig. 2 and 5) captures a scene within its field of view 201 to obtain a captured image. The scene within its field of view 201 includes HMI 301, a portion of steering wheel 302 and the active hand, here the right hand of the driver. In the captured image, a driver steering wheel zone 300 is defined that corresponds to the same area in the scene. The detected fingertip position 702, palm position 703 and lower arm position 704 of the active hand are obtained by image analysis (701 in fig. 9a). Based on the detected fingertip position 702, palm position 703 and lower arm position 704 of the active hand, an arm angle 801 is acquired by an arm angle determination process 800 (see fig. 10). Arm angle 801 includes arm angle 901 and arm orientation 902. Arm orientation 902 is the detected orientation of the active hand (see 303 in fig. 5) determined based on lower arm position 704 and palm position 703. Here, the arm orientation 902 is indicated by a dashed line. The arm angle 901 is the angle formed between the arm orientation 902 and the separation line 900, which divides the captured image into two parts, thereby dividing the captured scene into two parts. Here, the arm angle 901 is indicated by a double arrow. The lower arm position 704 is the position of the lower arm within a predefined area, such as the lower arm region 903, which is defined by a predefined threshold. The lower arm region 903 is defined as the top edge region of the captured image, which corresponds to the edge region closest to the rear of the captured scene and thus closest to the rear of the vehicle. The predefined threshold may be a 16 pixel threshold, or 5% of the image height, etc., in which case the present embodiment is not limited.
In the embodiment of fig. 11a, the arm angle 901 is positive with respect to the separation line 900, so that the arm orientation 902 points, from the perspective of the ToF sensor, from the left part to the right part of the scene captured by the ToF imaging system (see 200 in fig. 2 and 5). Thus, the hand-owner status is identified as the driver.
Conversely, in the embodiment of fig. 11b, the arm angle 901 is negative with respect to the separation line 900, so that the arm orientation 902 points, from the perspective of the ToF sensor, from the right part to the left part of the scene captured by the ToF imaging system (see 200 in fig. 2 and 5). Thus, the hand-owner status is identified as the passenger.
In the embodiment of fig. 11a and 11b, the separation line 900 is a vertical line, although the present embodiment is not limited in that regard. Alternatively, the separation line may be a diagonal separation line, such as separation line 2200 described with respect to fig. 24. The separation line 900 may correspond to an angle of 0°. The arm angle 901 may, for example, be an angle of 30° (left part of the scene) or an angle of -30° (right part of the scene), etc., and the present embodiment is not limited in that regard.
Fingertip analysis
Fig. 12a schematically shows an embodiment of a fingertip analysis process based on a tip criterion to obtain a tip score. Based on the tip position 702, a fingertip analysis 1000 is performed to obtain a tip score 1001, e.g. tip_i, where i is the hand-owner index, e.g. i = D for the driver and i = P for the passenger. The tip score tip_i is a score calculated from the fingertip analysis 1000 (tip criterion) and is used for the state score calculation, as described in more detail below with respect to fig. 16.
The tip-palm direction is determined based on the tip position 702 and the palm position 703, both of which are acquired during the image analysis described with respect to fig. 9 a. The fingertip-palm direction, which is the finger direction, is obtained by applying principal component analysis to a 3D point cloud or the like.
Fig. 12b schematically shows an embodiment of a finger gesture detection result obtained based on the detected tip position 702 and palm position 703. The finger gesture detection result is the result of a one-finger (1F) or two-finger (2F) gesture detection. The detector locates the detected tip position, i.e., tip position 702, relative to the palm center (i.e., palm position 703). This provides first information about the hand owner.
The finger gesture is also analyzed over a number of overlapping frames, e.g., 20 frames by default, and the analysis is reset if no further 1F/2F gesture is detected. Based on the tip position and palm position, a tip-palm direction is determined, wherein a specific range of tip-palm directions exists for each of the passenger's hand and the driver's hand. As described in the embodiment of fig. 9a above, the palm position 703 is estimated by calculating the center of gravity of the 2D palm, and the palm orientation may be obtained by applying principal component analysis to the segmented palm.
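A minimal sketch of the fingertip criterion, assuming the tip-palm direction is reduced to a 2D angle and compared against predefined angular ranges for the driver's and the passenger's hand; the ranges used here are placeholders, not values from the disclosure.

```python
import math

def tip_score(tip_xy, palm_xy, driver_range=(-90.0, 0.0), passenger_range=(0.0, 90.0)):
    """Tip criterion: the tip-palm direction is checked against a predefined angular
    range for the driver's hand and for the passenger's hand. Returns (tip_D, tip_P)."""
    angle = math.degrees(math.atan2(tip_xy[0] - palm_xy[0], tip_xy[1] - palm_xy[1]))
    tip_d = 1.0 if driver_range[0] <= angle <= driver_range[1] else 0.0
    tip_p = 1.0 if passenger_range[0] <= angle <= passenger_range[1] else 0.0
    return tip_d, tip_p
```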
Arm analysis
Fig. 13 schematically illustrates an embodiment of an arm analysis process to obtain hand parameters.
Based on the detected palm position 703, (lower) arm position 704 and arm angle 801 of the active hand, an arm analysis 1100 of the detected active hand is performed to obtain hand parameters. The hand parameters include a palm score 1101, e.g., palm_i, a score 1102 of the bottom of the arm, e.g., bottom_i, and an angle score 1103, e.g., angle_i, where i is the hand-owner index, e.g., i = D for the driver and i = P for the passenger. Arm analysis 1100 includes a palm position criterion, a lower arm criterion, and an arm angle criterion, such that palm_i is calculated based on the palm position criterion, bottom_i is calculated based on the lower arm criterion, and angle_i is calculated based on the arm angle criterion.
The palm position criterion is a criterion intended to distinguish the driver's arm from the passenger's arm. It is evaluated by determining the palm score 1101 (i.e., palm_i).
The lower arm criterion is a criterion intended to distinguish the lower arm of the driver from the lower arm of the passenger. It is evaluated by determining the lower arm score 1102 (i.e., bottom_i).
The arm angle criterion is a criterion intended to distinguish the arm angle of the driver from the arm angle of the passenger. It is evaluated by determining the arm angle score 1103 (i.e., angle_i). The arm angle is determined with respect to the separation line that divides the captured image into two parts and is taken as an angle of 0°. The sine of the angle contributes to the final score, which is the score that determines the hand owner, and thus contributes to the identified status of the owner of the active hand. For example, in the case of Left Hand Driving (LHD), when a positive arm angle is detected, the hand owner is located in the right part of the captured image, which gives more weight to the passenger, i.e., to angle_P. In the LHD case, when a negative arm angle is detected, the hand owner is located in the left part of the captured image, which gives more weight to the driver, i.e., to angle_D.
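A minimal sketch of the arm analysis of fig. 13 for an LHD configuration, assuming a vertical separation line in the middle of the image and scores expressed as (driver, passenger) pairs; the exact scoring values are assumptions.

```python
import math

def arm_analysis(palm_x, lower_arm_x, arm_angle, image_width, lhd=True):
    """Arm analysis of Fig. 13: each criterion yields a (driver, passenger) score pair.
    Left half -> driver and right half -> passenger in the LHD case; the sine of the
    arm angle contributes to the angle score."""
    mid = image_width / 2.0
    palm = (1.0, 0.0) if (palm_x < mid) == lhd else (0.0, 1.0)         # palm position criterion
    bottom = (1.0, 0.0) if (lower_arm_x < mid) == lhd else (0.0, 1.0)  # lower arm criterion
    s = math.sin(math.radians(arm_angle))
    # Positive angle favours the passenger in the LHD case, negative favours the driver.
    angle = (max(-s, 0.0), max(s, 0.0)) if lhd else (max(s, 0.0), max(-s, 0.0))
    return {"palm": palm, "bottom": bottom, "angle": angle}
```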
Arm voting results
Fig. 14a schematically shows an embodiment of an arm voting process performed based on hand parameters to obtain an arm voting result.
Based on the detected palm position 703, (lower) arm position 704 and arm angle 801 of the active hand, arm voting 1200 is performed to obtain an arm voting result 1201. The arm voting result 1201 is a true or false value, i.e., a boolean value. Arm voting 1200 is implemented to avoid false positive hand-owner states by analyzing the arm criteria, i.e., the palm position criterion, the lower arm criterion, and the arm angle criterion, as described above with respect to fig. 13. The output of the arm voting 1200 is a boolean value that affects the calculated state scores as described below with respect to fig. 16, and thus the determination of the hand-owner status as described below with respect to fig. 17 to 19.
Fig. 14b schematically shows an embodiment of arm voting as performed in fig. 14a above. At 1202, the obtained lower arm position (see 704 in fig. 14 a) is compared to a threshold value. If the lower arm position is less than the threshold, then at 1202 the value is set to true and the voting result 1203 is therefore attributed to the driver. If the lower arm position is greater than the threshold, then at 1202 the value is set to false, and thus the voting result 1204 is attributed to the passenger.
Fig. 14c schematically shows another embodiment of arm voting as performed in fig. 14a above. At 1205, the arm voting results are attributed to the driver 1206, the passenger 1207, or the unknowns 1208 based on the obtained arm angular position (see 801 in fig. 10) and the obtained palm position (see 703 in fig. 9 a).
Arm vote 1200 is implemented to avoid false positive hand owner status by analyzing arm criteria. Arm vote 1200 requires a separation line (see 900 in fig. 11a and 11 b) that is defined over the captured image and separates the captured image into two parts.
The embodiments of fig. 15a and 15b show in more detail how the voting results are attributed during the arm voting process described in relation to fig. 14a, based on the angle formed between the arm of the detected hand and the separation line 900. The separation line 900 separates the captured image into a left part and a right part. In the LHD configuration, the area located in the left part of the captured image is the driver area and the area located in the right part of the captured image is the passenger area. The angle formed between the separation line 900 and the bold black line 1300 is the boundary angle within which the arm voting result is attributed to unknown. The boundary angle may be an angle of 5°, in which case the present embodiment is not limited. Alternatively, the boundary angle may be 0 degrees (0°), or any suitable angle selected by one skilled in the art.
In the embodiment of fig. 15a, the palm position is located in the left part of the captured image, and thus the voting result is attributed to the driver. The arm angle 901 is positive, i.e. the arm angle >0 °, and thus the voting result is attributed to the passenger. The lower arm region 903 is located at the right part of the captured image, and thus the voting result is attributed to the passenger. The separation line 900 may be an angle of 0 °. The arm angle 901 may be an angle of 30 °, in which case the present embodiment is not limited.
In the embodiment of fig. 15b, the palm position is located in the right part of the captured image, and thus the voting result is attributed to the passenger. The arm angle 901 is negative and the arm angle is <0 deg., so the voting result is attributed to the driver. The lower arm region 903 is located at the left part of the captured image, and thus the voting result is attributed to the driver. The arm angle 901 may be an angle of (-) 30 ° (right part of the scene) or the like, and the present embodiment is not limited in that respect.
In the LHD configuration, for example, if the palm position is detected in the left part of the captured image but a positive arm angle is detected, the arm is considered to be the arm of the passenger by the vote on the angle, for example, arm_right. Otherwise, the arm is considered to be from the driver by the vote on the position, for example, arm_left_has_palm_position_vote = true. Then, using the lower arm criterion, if the arm position, i.e. the position where the arm enters the captured image, is detected in the left part of the image, it is considered to belong to the driver. Thus, the voting result is attributed to that position, for example, arm_left.
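A minimal sketch of the arm voting of figs. 14a to 15b for an LHD configuration, assuming a vertical separation line in the middle of the image; the majority rule used to combine the three votes is an assumption.

```python
def arm_vote(palm_x, lower_arm_x, arm_angle, image_width, boundary_deg=5.0):
    """Arm voting of Figs. 14a-15b (LHD): palm position, lower arm position and arm
    angle each cast a vote; angles inside the boundary band around the separation
    line vote 'unknown'. The votes are combined by simple majority."""
    mid = image_width / 2.0
    votes = [
        "driver" if palm_x < mid else "passenger",       # palm position vote
        "driver" if lower_arm_x < mid else "passenger",  # lower arm (entry position) vote
    ]
    if arm_angle > boundary_deg:
        votes.append("passenger")                        # positive angle -> passenger (LHD)
    elif arm_angle < -boundary_deg:
        votes.append("driver")                           # negative angle -> driver (LHD)
    else:
        votes.append("unknown")                          # within the boundary band
    d, p = votes.count("driver"), votes.count("passenger")
    return "driver" if d > p else "passenger" if p > d else "unknown"
```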
State score calculation/score determination
Fig. 16 schematically shows an embodiment of the score determination process, in which two state scores, i.e. the score of the driver and the score of the passenger, are calculated based on the results of the previously calculated criteria.
Based on the arm voting result 1201, the hand parameters palm_i, bottom_i and angle_i, and the tip parameter tip_i, two state scores 1401 and 1402 are calculated, namely the score of the driver, score_D, and the score of the passenger, score_P. The state score score_i is a score calculated and used to identify the hand-owner status.
In the embodiment of fig. 16, the state score score_i is
score_i = w_h · (hist_i + l_i) / 2 + w_t · tip_i + w_palm · palm_i + w_bottom · bottom_i + w_angle · angle_i
where hist_i is the calculated historical average hand owner, l_i is the last state of the history, tip_i is the score calculated from the hand tip criterion, palm_i is the score calculated from the palm position criterion, bottom_i is the score calculated from the lower arm criterion, angle_i is the score calculated from the arm angle criterion, w_h is the historical weight, w_t is the tip weight, w_palm is the palm position weight, w_bottom is the lower arm weight, and w_angle is the arm angle weight.
As can be seen from the above formula, a weight is applied to each component and all weights are normalized.
The historical average dominant owner h̄ is calculated taking into account the global historical owner values. The global historical owner is the average of the number of driver/passenger detections during the (previous) active frames in the history. The global historical owner value accounts for 50% of the historical owner score.
The history last state l_i adds the last known state to the final score; for example, if the last state is set to unknown owner, then l_i = 0.
To calculate the history contribution weighted by w_h, a first score is calculated in two parts using the owner history: the global historical owner accounts for 50% of the historical owner score and the last owner accounts for the remaining 50%.
The weighted palm position w_palm and the weighted arm angle w_angle are calculated from the arm voting process, as described above with respect to figs. 14a to 15b.
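A minimal sketch of how such a normalized weighted sum could look in code is given below; the weight values, the helper name and the exact combination are assumptions for illustration, since only the components and the 50/50 split of the history term are given in the text.

# Minimal sketch of the state score computation of fig. 16.
# The weight values and the exact combination are assumptions; only the listed
# components and the 50/50 split of the history term come from the text.

def state_score(history_avg, last_state, tip, palm, bottom, angle,
                w_h=0.4, w_t=0.2, w_palm=0.2, w_bottom=0.1, w_angle=0.1):
    """Weighted, normalized combination of the per-criterion scores (each in 0..1)."""
    weights = [w_h, w_t, w_palm, w_bottom, w_angle]
    total = sum(weights)
    # History component: 50% global historical owner, 50% last known state.
    history = 0.5 * history_avg + 0.5 * last_state
    components = [history, tip, palm, bottom, angle]
    return sum(w * c for w, c in zip(weights, components)) / total

# Example: one score per candidate owner, e.g. score_D and score_P.
score_d = state_score(0.8, 1.0, 0.7, 0.9, 0.6, 0.8)
score_p = state_score(0.2, 0.0, 0.3, 0.1, 0.4, 0.2)
print(score_d, score_p)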
Fig. 17 shows a flow chart visualizing a method of determining the hand-owner state for an identified active hand, in which the calculated score of the driver and the calculated score of the passenger are compared.
At 1500, it is determined using a comparison operator whether the score of the driver, score_D (see fig. 16), and the score of the passenger, score_P (see fig. 16), are equal. At 1500, if score_D is equal to score_P, the method proceeds at 1501, where the hand-owner status is set to unknown. At 1500, if score_D is not equal to score_P, the method proceeds at 1502. At 1502, if the difference |score_D - score_P| is above a threshold ε, where for example ε = 0.1, the method proceeds at 1504. At 1502, if the difference |score_D - score_P| is below the threshold ε, the method proceeds at 1503. At 1503, the hand-owner status is set to the last known status. At 1504, if the identified active hand has passed through an area of the driver, such as the driver steering wheel zone (see 300 in figs. 5, 11a and 11b), the method proceeds at 1505. At 1504, if the identified active hand has not passed through such an area, the method proceeds at 1506. At 1506, if the score of the driver, score_D, is above the score of the passenger, score_P, the method proceeds at 1505. At 1506, if score_D is below score_P, the method proceeds at 1507. At 1505, the hand-owner status is set to driver. At 1507, the hand-owner status is set to passenger. After 1506, in the event that the difference between the scores is too low, the process returns to 1502 and is repeated.
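The comparison logic of fig. 17 can be summarized by the short sketch below; the function name and the fallback values are assumptions, while the threshold ε = 0.1 follows the example in the text.

# Minimal sketch of the decision process of fig. 17.

EPSILON = 0.1  # example threshold from the text

def decide_owner(score_d, score_p, passed_driver_zone=False, last_known="unknown"):
    if score_d == score_p:                    # 1500 -> 1501
        return "unknown"
    if abs(score_d - score_p) < EPSILON:      # 1502 -> 1503
        return last_known
    if passed_driver_zone:                    # 1504 -> 1505
        return "driver"
    return "driver" if score_d > score_p else "passenger"   # 1506 -> 1505/1507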
Generation of a hand-owner state
Fig. 18 shows a flow chart visualizing a method of generating the hand-owner status for an identified active hand in a captured image, as described above with respect to figs. 5 to 17.
At 1600, the driver steering wheel zone in the captured image is obtained (see 300 in figs. 5, 11a and 11b). At 1601, if an active hand is detected in the driver steering wheel zone in at least m frames (see fig. 8), the method proceeds at 1602. At 1602, the hand owner is identified and the hand-owner status is set to driver. At 1601, if no active hand is detected in the driver steering wheel zone in at least m frames (see fig. 8), the method proceeds at 1603. At 1603, the tip position (see 702 in figs. 9a and 12a), the palm position (see 703 in figs. 9a and 14a) and the arm position (see 704 in figs. 9a and 14a) are analyzed based on the tip criterion, the palm criterion and the arm criterion, respectively, to obtain the scores tip_i, palm_i, bottom_i and angle_i used to calculate the score of the driver, score_D, and the score of the passenger, score_P. At 1604, score_D and score_P are calculated (see fig. 16). At 1605, if the difference |score_D - score_P| is above a threshold ε, where for example ε = 0.1, the method proceeds at 1607. At 1605, if the difference |score_D - score_P| is below the threshold ε, the method proceeds at 1606. At 1606, the owner of the hand is identified and the hand-owner status is set to unknown. At 1607, if the score of the driver, score_D, is above the score of the passenger, score_P, the method proceeds at 1608. At 1607, if score_D is below score_P, the method proceeds at 1609. At 1608, the owner of the hand is identified and the hand-owner status is set to driver. At 1609, the owner of the hand is identified and the hand-owner status is set to passenger.
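For illustration, the per-frame flow of fig. 18 could be sketched as below; the helper functions, the frame representation and the value of m are assumptions, and the score comparison repeats the fig. 17 logic without the steering-wheel shortcut.

# Minimal sketch of the flow of fig. 18. The two helpers are stubs standing in
# for the hand detection and criteria analysis described in the text; all names
# and the value of M_FRAMES are assumptions.

M_FRAMES = 5  # minimum number of frames with the hand in the wheel zone (assumed value)

def detect_hand_in_wheel_zone(frame) -> bool:
    return frame.get("hand_in_wheel_zone", False)                 # placeholder

def compute_scores(frame):
    return frame.get("score_d", 0.0), frame.get("score_p", 0.0)   # placeholder

def owner_for_frames(frames, last_known="unknown"):
    # 1600/1601: active hand seen in the driver steering wheel zone in >= m frames?
    recent = frames[-M_FRAMES:]
    if len(recent) == M_FRAMES and all(detect_hand_in_wheel_zone(f) for f in recent):
        return "driver"                                            # 1602
    # 1603/1604: tip, palm, lower arm and arm angle criteria -> two state scores.
    score_d, score_p = compute_scores(frames[-1])
    # 1605-1609: compare the scores as in fig. 17.
    if abs(score_d - score_p) < 0.1:
        return "unknown"                                           # 1606
    return "driver" if score_d > score_p else "passenger"          # 1608/1609

frames = [{"hand_in_wheel_zone": True}] * 5
print(owner_for_frames(frames))   # "driver"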
Fig. 19 shows a flow chart visualizing a method of hand-owner identification in which historical statistics of the hand owner are calculated and arm voting as well as right-hand drive (RHD) switching are performed.
At 1700, a 2D image and/or a confidence image is obtained. At 1701, if the value of the "hand on steering wheel" counter indicating the continuous mode is above a threshold, for example 20 frames, the method proceeds at 1708. If the value is below the threshold, the method proceeds at 1703. At 1702, the value of the "hand on steering wheel" counter is incremented by 1; this value is used at 1601 in fig. 18 above. At 1703, a dedicated hand-owner detection pipeline is used, wherein the dedicated hand-owner detection pipeline comprises steps 1704 to 1707. At 1704, the tip and hand parameters are calculated based on the tip criterion (see fig. 12a), the arm criteria (see figs. 14a, 14b, 14c) and the arm voting results (see fig. 12a) to obtain the scores tip_i, palm_i, bottom_i and angle_i used to calculate the score of the driver, score_D, and the score of the passenger, score_P. At 1705, score_D and score_P are calculated (see fig. 16). At 1708, if a hand is identified on the steering wheel, the method proceeds at 1712. At 1708, if no hand is identified on the steering wheel, the method proceeds at 1709. At 1709, the historical statistics used at 1706 are calculated, and the method proceeds at 1706. At 1706, a determination process (see fig. 17) based on comparing score_D and score_P is performed to obtain the hand-owner state, i.e. driver, passenger or unknown. At 1707, depending on the driving configuration, i.e. LHD or RHD, the result of the determination process may be inverted if required; that is, a hand-owner state of driver becomes passenger and vice versa. With or without the LHD/RHD switch, the result of the determination process is a hand-owner state, i.e. driver, passenger or unknown. In the embodiment of fig. 19, the RHD switch at 1707 is optional. The continuous mode is activated when an active hand touching the steering wheel is detected for a number of frames, for example 20 frames. The historical statistics calculated at 1709 are the scores, such as the history last state l_i, used to calculate the scores of the driver and the passenger described above with respect to fig. 16.
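The continuous mode and the optional LHD/RHD switch of fig. 19 could be sketched as follows; the class name, the counter reset behaviour and the simplified score comparison are assumptions, while the 20-frame threshold follows the example in the text.

# Minimal sketch of the continuous mode and the optional RHD switch of fig. 19.
# The 20-frame threshold follows the text; everything else (names, counter reset,
# simplified comparison) is an assumption for illustration.

CONTINUOUS_MODE_THRESHOLD = 20  # frames with the active hand on the steering wheel

class OwnerTracker:
    def __init__(self, rhd=False):
        self.rhd = rhd
        self.hand_on_wheel_frames = 0

    def update(self, hand_on_wheel: bool, score_d: float, score_p: float) -> str:
        if hand_on_wheel:
            self.hand_on_wheel_frames += 1          # 1702
        else:
            self.hand_on_wheel_frames = 0
        # 1701/1708/1712: continuous mode -> the hand on the wheel belongs to the driver.
        if self.hand_on_wheel_frames >= CONTINUOUS_MODE_THRESHOLD:
            return "driver"
        # 1706: score-based determination (simplified version of fig. 17).
        if abs(score_d - score_p) < 0.1:
            owner = "unknown"
        else:
            owner = "driver" if score_d > score_p else "passenger"
        # 1707: optional LHD/RHD switch inverts driver and passenger.
        if self.rhd and owner in ("driver", "passenger"):
            owner = "passenger" if owner == "driver" else "driver"
        return owner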
Fig. 20 shows a flow chart visualizing an embodiment of a method of generating a hand-owner state. At 1800, an image of a scene, for example an in-cabin scene, within the field of view (see 201 in figs. 2 and 5) of a ToF imaging system is acquired by a ToF sensor (see 200 in figs. 2 and 5) of the ToF imaging system. At 1801, identification of an active hand in the image is performed. At 1802, a hand-owner status of the identified hand is generated based on the active hand detected and identified at 1801. As described above with respect to figs. 17, 18 and 19, the hand-owner status may be, for example, driver, passenger, unknown or last known.
Implementation of the embodiments
Fig. 21 shows a block diagram depicting an example of a schematic configuration of a vehicle control system 7000 as an example of a mobile body control system to which the technology according to the embodiment of the present disclosure can be applied. The vehicle control system 7000 includes a plurality of electronic control units connected to each other via a communication network 7010. In the example depicted in fig. 21, the vehicle control system 7000 includes a drive system control unit 7100, a vehicle body system control unit 7200, a battery control unit 7300, an off-vehicle information detection unit 7400, an in-vehicle information detection unit 7500, and an integrated control unit 7600. For example, the communication network 7010 that connects a plurality of control units to each other may be an in-vehicle communication network conforming to any standard, such as a Controller Area Network (CAN), a Local Interconnect Network (LIN), a Local Area Network (LAN), or FlexRay (registered trademark), or the like.
Each control unit includes: a microcomputer that performs arithmetic processing according to various programs; a storage section that stores a program executed by a microcomputer, parameters for various operations, and the like; and a driving circuit that drives the various control-target devices. Each control unit further comprises: a network interface (I/F) for performing communication with other control units via a communication network 7010; and a communication I/F for communicating with devices, sensors, etc. inside and outside the vehicle by wired communication or radio communication. The functional configuration of the integrated control unit 7600 shown in fig. 21 includes a microcomputer 7610, a general-purpose communication I/F7620, a special-purpose communication I/F7630, a positioning portion 7640, a beacon receiving portion 7650, an in-vehicle device I/F7660, a sound/image outputting portion 7670, an in-vehicle network I/F7680, and a storage portion 7690. Similarly, other control units include a microcomputer, a communication I/F, a storage section, and the like.
The drive system control unit 7100 controls the operation of devices related to the drive system of the vehicle according to various programs. The drive system control unit 7100 may have a function as a control device of an Antilock Brake System (ABS), an Electronic Stability Control (ESC), or the like.
The drive system control unit 7100 is connected to a vehicle state detection portion 7110. The drive system control unit 7100 performs arithmetic processing using a signal input from the vehicle state detection portion 7110, and controls an internal combustion engine, a drive motor, an electric power steering apparatus, a brake apparatus, and the like.
The vehicle body system control unit 7200 controls the operation of various devices provided on the vehicle body according to various programs. For example, the vehicle body system control unit 7200 functions as a control device for a keyless entry system, a smart key system, a power window device, or various lamps such as a headlight, a reversing lamp, a brake lamp, a turn lamp, a fog lamp, or the like.
The battery control unit 7300 controls a secondary battery 7310 as a power source for driving the motor according to various programs.
The outside-vehicle information detection unit 7400 detects information about the outside of the vehicle including the vehicle control system 7000. For example, the outside-vehicle information detection unit 7400 is connected to at least one of the imaging portion 7410 and the outside-vehicle information detection portion 7420. The imaging portion 7410 includes at least one of a time-of-flight (ToF) camera, a stereoscopic camera, a monocular camera, an infrared camera, and other cameras. For example, the outside-vehicle information detecting portion 7420 includes at least one of an environmental sensor for detecting a current atmospheric condition or weather condition and a surrounding information detecting sensor for detecting another vehicle, an obstacle, a pedestrian, or the like around the vehicle including the vehicle control system 7000.
The in-vehicle information detection unit 7500 detects information about the inside of the vehicle. The in-vehicle information detection unit 7500 may collect any information related to a situation related to the vehicle. The in-vehicle information detection unit 7500 is connected to, for example, a driver and/or passenger state detection portion 7510 that detects a state of a driver and/or passenger. The driver state detection portion 7510 may include a camera that images the driver, a biosensor that detects biological information of the driver, a microphone that collects sounds of the interior of the vehicle, and the like. For example, a biosensor is arranged in a seat surface, a steering wheel, or the like, and detects biological information of an occupant sitting in the seat or a driver holding the steering wheel.
The integrated control unit 7600 controls general operations within the vehicle control system 7000 according to various programs. The integrated control unit 7600 is connected to the input portion 7800. The input portion 7800 is implemented by a device that can be operated by a passenger for input, for example, a touch panel, a button, a microphone, a switch, a lever, or the like. The integrated control unit 7600 may be supplied with data obtained by performing voice recognition on voice input via the microphone. For example, the input portion 7800 may be a remote control device using infrared rays or other radio waves, or an externally connected device such as a mobile phone or a Personal Digital Assistant (PDA) that supports the operation of the vehicle control system 7000. For example, the input portion 7800 may be a camera, in which case a passenger can input information through gestures. Alternatively, data obtained by detecting the movement of a wearable device worn by a passenger may be input. Further, the input portion 7800 may include, for example, an input control circuit that generates an input signal based on information input by a passenger or the like using the above-described input portion 7800 and outputs the generated input signal to the integrated control unit 7600. By operating the input portion 7800, a passenger or the like inputs various data to the vehicle control system 7000 or gives instructions for processing operations.
The storage portion 7690 may include a Read Only Memory (ROM) storing various programs executed by the microcomputer and a Random Access Memory (RAM) storing various parameters, operation results, sensor values, and the like. Further, the storage portion 7690 may be implemented by a magnetic storage device such as a Hard Disk Drive (HDD) or the like, a semiconductor storage device, an optical storage device, a magneto-optical storage device, or the like.
The general-purpose communication I/F7620 is a general-purpose communication I/F that coordinates communication with various devices present in the external environment 7750. The general-purpose communication I/F7620 may implement a cellular communication protocol such as Global System for Mobile communications (GSM (registered trademark)) or Worldwide Interoperability for Microwave Access (WiMAX (registered trademark)), or another wireless communication protocol such as wireless LAN (also referred to as Wireless Fidelity (Wi-Fi (registered trademark))), Bluetooth (registered trademark), or the like.
The dedicated communication I/F7630 is a communication I/F that supports development of a communication protocol for use in a vehicle. For example, the dedicated communication I/F7630 may implement a standard protocol, such as wireless access in a vehicle environment (WAVE), which is a combination of Institute of Electrical and Electronics Engineers (IEEE) 802.11p as a lower layer and IEEE 1609 as a higher layer, dedicated Short Range Communication (DSRC), or cellular communication protocol. The dedicated communication I/F7630 generally performs V2X communication as a concept including one or more of communication between a vehicle and a vehicle (vehicle-to-vehicle), communication between a road and a vehicle (vehicle-to-infrastructure), communication between a vehicle and a house (vehicle-to-house), and communication between a pedestrian and a vehicle (vehicle-to-pedestrian).
For example, the positioning section 7640 performs positioning by receiving Global Navigation Satellite System (GNSS) signals from GNSS satellites (e.g., GPS signals from Global Positioning System (GPS) satellites), and generates position information including latitude, longitude, and altitude of the vehicle. Incidentally, the positioning section 7640 may recognize the current position by exchanging signals with a wireless access point, or may obtain position information from a terminal such as a mobile phone, a Personal Handyphone System (PHS), or a smart phone having a positioning function.
For example, the beacon receiving portion 7650 receives radio waves or electromagnetic waves transmitted from a radio station installed on a road or the like, thereby obtaining information about the current position, congestion, a closed road, necessary time, or the like. Incidentally, the function of the beacon receiving portion 7650 may be included in the above-described dedicated communication I/F7630.
The in-vehicle apparatus I/F7660 is a communication interface that coordinates connection between the microcomputer 7610 and various in-vehicle apparatuses 7760 existing inside the vehicle. The in-vehicle device I/F7660 may establish a wireless connection using a wireless communication protocol, such as wireless LAN, bluetooth (registered trademark), near Field Communication (NFC), or Wireless Universal Serial Bus (WUSB). Further, the in-vehicle apparatus I/F7660 may establish a wired connection through a Universal Serial Bus (USB), a high-definition multimedia interface (HDMI (registered trademark)), a mobile high-definition link (MHL), or the like via a connection terminal (not shown in the figure) (and a cable if necessary). For example, the in-vehicle device 7760 may include at least one of a mobile device and a wearable device owned by an occupant, and an information device carried into or attached to the vehicle. The in-vehicle device 7760 may also include a navigation device that searches for a path to any destination. The in-vehicle devices I/F7660 exchange control signals or data signals with these in-vehicle devices 7760.
The in-vehicle network I/F7680 is an interface that mediates communication between the microcomputer 7610 and the communication network 7010. The in-vehicle network I/F7680 transmits and receives signals and the like according to a prescribed protocol supported by the communication network 7010.
The microcomputer 7610 of the integrated control unit 7600 controls the vehicle control system 7000 according to various programs based on information obtained via at least one of the general-purpose communication I/F7620, the special-purpose communication I/F7630, the positioning portion 7640, the beacon receiving portion 7650, the in-vehicle device I/F7660, and the in-vehicle network I/F7680. The microcomputer 7610 may be the processor 202 of fig. 2, and the microcomputer 7610 may also implement the functions described in more detail in fig. 9a, 9b, 11a, 12a, 13 and 16. For example, based on the obtained information about the inside and outside of the vehicle, the microcomputer 7610 may calculate a control target value of the driving force generating device, the steering mechanism, or the braking device, and output a control command to the driving system control unit 7100. For example, the microcomputer 7610 may perform cooperative control aimed at realizing Advanced Driver Assistance System (ADAS) functions including avoiding a collision or alleviating an impact for the vehicle, driving based on a following distance, driving maintaining a vehicle speed, warning of a collision of the vehicle, warning of a departure of the vehicle from a lane, and the like. Further, the microcomputer 7610 may perform cooperative control intended for automatic driving, which causes the vehicle to run autonomously independent of the operation of the driver, by controlling the driving force generating device, the steering mechanism, the braking device, and the like, based on the obtained information about the surroundings of the vehicle.
The microcomputer 7610 may generate three-dimensional distance information between the vehicle and an object (e.g., a surrounding building or a person, etc.) based on information obtained via at least one of the general communication I/F7620, the dedicated communication I/F7630, the positioning portion 7640, the beacon receiving portion 7650, the in-vehicle device I/F7660, or the in-vehicle network I/F7680, and generate local map information including surrounding information about the current position of the vehicle. Further, based on the obtained information, the microcomputer 7610 can predict a hazard, such as a collision of a vehicle, approach of a pedestrian, or the like, or entry into a closed road, or the like, and generate a warning signal. For example, the warning signal may be a signal for generating a warning sound or turning on a warning lamp.
The sound/image outputting portion 7670 transmits an output signal (e.g., a modified audio signal) of at least one of a sound and an image to an output device capable of visually or audibly notifying information to an occupant of the vehicle or the outside of the vehicle. In the example of fig. 21, the audio speaker 7710, the display portion 7720, and the instrument panel 7730 are shown as output devices. For example, the display portion 7720 may include at least one of an in-vehicle display and a head-up display. The display portion 7720 may have an Augmented Reality (AR) display function. The output device may be a device other than these devices, such as a headphone, a wearable device such as a glasses-type display worn by a passenger, a projector, a lamp, or the like. In the case where the output device is a display device, the display device visually displays results obtained by various types of processing performed by the microcomputer 7610 or information received from other control units in various formats such as text, images, tables, charts, and the like. Further, in the case where the output device is an audio output device, the audio output device converts an audio signal composed of reproduced audio data or the like into an analog signal and audibly outputs the analog signal.
Incidentally, in the example shown in fig. 21, at least two control units connected to each other via the communication network 7010 may be integrated into one control unit. Alternatively, each individual control unit may comprise a plurality of control units. Further, the vehicle control system 7000 may include another control unit not shown in the drawings. Further, some or all of the functions performed by one control unit in the above description may be allocated to another control unit. That is, as long as information is transmitted and received via the communication network 7010, predetermined arithmetic processing can be performed by any control unit. Similarly, a sensor or a device connected to one of the control units may be connected to the other control unit, and a plurality of control units may mutually transmit and receive detection information via the communication network 7010.
Incidentally, a computer program for realizing the functions of the electronic apparatus according to the present embodiment described with reference to fig. 2 and 5 may be implemented in one of the control units and the like. Furthermore, a computer-readable recording medium storing such a computer program may also be provided. The recording medium is, for example, a magnetic disk, an optical disk, a magneto-optical disk, a flash memory, or the like. Further, for example, the above-described computer program may be distributed via a network without using a recording medium.
It should be noted that the above description is merely an example configuration. Alternative configurations may be implemented with additional or other sensors, storage devices, interfaces, etc.
Fig. 22 schematically illustrates an embodiment of a hand-owner detection process performed to adjust car system behavior based on an input user.
A vehicle (e.g., automobile 2100) includes an automobile system setup 2101, an automobile safety system 2102, and an automobile system display 2103. The user 2104 (which is the driver and/or passenger of the automobile 2100) can see what is displayed on the automobile system display 2103. The car system display 2103 is operated by a user's hand (e.g., an active hand), and the hand-owner detector 2105 detects the active hand and identifies the hand owner. The hand owner detector 2105 detects an active hand of the user 2104 based on a hand detection 2106, palm analysis 2107, tip analysis 2108, seat occupant detection 2109 and predefined steering wheel zones of the steering wheel 2110. Results of the process performed by the hand-owner detector 2105 are obtained by the system of the automobile 2100 so that automobile system behavior is adjusted based on the input user.
In the embodiment of fig. 22, the automotive system display 2103 may be included in, for example, the infotainment system 203 described above with respect to fig. 2. The hand owner detector 2105 may be implemented by the processor 202 described above with respect to fig. 2. The hand detection 2106 may be performed as described above in fig. 7. Palm analysis 2107 may be performed as described above in fig. 9a, 9b, 10 and 13. Tip analysis 2108 can be fingertip analysis 1000 as described above in fig. 12 a. The detection 2109 of the seat occupant may be performed as described above in fig. 6a and 6 b. Steering wheel 2110 may be steering wheel 302 described above in fig. 5, 11a, and 11 b.
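As an illustration of how the obtained hand-owner state could be used to adjust the car system behaviour, the sketch below restricts certain display interactions when the operating hand belongs to the driver; the specific policy and all names are assumptions for illustration, not part of the original disclosure.

# Minimal sketch of adjusting the car system behaviour based on the hand-owner
# state reported by the hand-owner detector 2105. The policy below (restricting
# some display interactions for the driver while the vehicle is moving) is an
# assumed example only.

def allowed_interaction(owner_state: str, feature: str, vehicle_moving: bool) -> bool:
    restricted_for_driver = {"keyboard_input", "video_playback"}
    if owner_state == "driver" and vehicle_moving and feature in restricted_for_driver:
        return False
    return True

# Example: a passenger may type a destination while driving, the driver may not.
print(allowed_interaction("passenger", "keyboard_input", vehicle_moving=True))  # True
print(allowed_interaction("driver", "keyboard_input", vehicle_moving=True))     # False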
Fig. 23 shows in more detail an embodiment of the separation line defined in the captured image. The separation line 2200 is a line that divides an image captured by the in-vehicle ToF imaging system into two parts. In this embodiment, the separation line 2200 is an oblique black line defined in the captured image. Based on the separation line 2200, the angle determination for the identified active hand may be performed. The position of the separation line may be modified to adjust the sensitivity of the method for the driver and the passenger, depending on the vehicle configuration and/or the functionality (modality).
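A small sketch of how the signed arm angle relative to such a separation line could be computed is given below; the coordinate convention (image x to the right, y downwards) and the function name are assumptions for illustration.

# Minimal sketch of computing the signed arm angle relative to a separation
# line such as 900/2200. Positive angles fall on the passenger side in an LHD
# configuration, matching the voting convention described above.

import math

def arm_angle_deg(palm_xy, lower_arm_xy, separation_angle_deg=0.0):
    """Signed angle between the lower-arm-to-palm direction and the separation line."""
    dx = palm_xy[0] - lower_arm_xy[0]
    dy = palm_xy[1] - lower_arm_xy[1]
    arm_dir = math.degrees(math.atan2(dx, -dy))      # 0 deg = pointing "up" in the image
    angle = arm_dir - separation_angle_deg
    # Normalize to (-180, 180] so the sign indicates the side of the line.
    return (angle + 180.0) % 360.0 - 180.0

# Example: palm to the right of the arm entry point -> positive angle (passenger in LHD).
print(arm_angle_deg(palm_xy=(0.62, 0.40), lower_arm_xy=(0.50, 0.95)))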
Fig. 24 schematically shows a hand-owner detection result in which the hand-owner status is set as the driver while the hand-owner interacts with the in-vehicle infotainment system. The active hand 2300 is captured by an in-vehicle ToF imaging system (see 200 in fig. 2) while interacting with the vehicle's infotainment system (see 203 in fig. 2). The active hand 2300 is detected by a hand owner detector (see fig. 7) and based on the embodiments described above with respect to fig. 2-19, the hand owner is identified and a hand owner status is generated, where the hand owner status is set as the driver.
***
It should be appreciated that the embodiments describe a method with an exemplary ordering of method steps. However, the particular order of the method steps is presented for illustration purposes only and should not be construed as a constraint.
It should also be noted that the division of the electronic device of fig. 21 into units is for illustration purposes only, and the present disclosure is not limited to any particular division of functionality in a particular unit. For example, at least a portion of the circuitry may be implemented by a corresponding programmed processor, field Programmable Gate Array (FPGA), dedicated circuitry, or the like.
In so far as not stated otherwise, all of the elements and entities described in the present specification and claimed in the appended claims may be implemented as integrated circuit logic, for example, on a chip, and the functions provided by these elements and entities may be implemented by software.
To the extent that the above-disclosed embodiments are implemented, at least in part, using software-controlled data processing apparatus, it will be appreciated that computer programs providing such software control, as well as transmission, storage or other media providing such computer programs, are contemplated as aspects of the present disclosure.
Note that the present technology can also be configured as described below.
(1) An electronic device includes circuitry configured to perform a hand-owner identification (1706) based on an image analysis (701) of an image (700) captured by an imaging system (200) to obtain a hand-owner status (1710, 1711, 1712).
(2) The electronic device of (1), wherein the circuitry is configured to define the driver steering wheel zone (300) as a region of interest in the captured image (700), and to perform the hand-owner identification (1706) based on the defined driver steering wheel zone (300).
(3) The electronic device of (1) or (2), wherein the circuitry is configured to detect a live hand (303) in a captured image (700), the captured image captured as a field of view (201) of an imaging system (200) of a ToF imaging system, and to perform a hand-owner identification (1706) based on the detected live hand (303).
(4) The electronic device of (2) or (3), wherein the circuitry is configured to define a minimum number of frames (m) in which an active hand (303) should be detected in the driver steering wheel zone (300).
(5) The electronic device of (4), wherein the circuitry is configured to count a number of frames (n), detect an active hand (303) in the driver steering wheel region (300) in a frame of the number of frames n, and perform the hand-owner identification (1706) by comparing a minimum number of frames (m) with the counted number of frames (n).
(6) The electronic device of (5), wherein the circuitry is configured to obtain a hand-owner status (1710, 1711, 1712) indicating that the hand-owner is a driver when the minimum number of frames (m) is less than the counted number of frames (n).
(7) The electronic device of any one of (1) to (6), wherein the circuitry is configured to perform image analysis (701) based on the captured image (700) to obtain a tip position (702), a palm position (703), and an arm position (704) indicative of a lower arm position.
(8) The electronic device according to (7), wherein the circuit is configured to perform the arm angle determination (800) based on the palm position (703) and the lower arm position (704) to obtain the arm angle (801).
(9) The electronic device of (7), wherein the circuitry is configured to perform a fingertip analysis (1000) based on the tip position (702) to obtain a tip score (tip_i).
(10) The electronic device of (8) or (9), wherein the circuitry is configured to perform an arm analysis (1100) based on the palm position (703), the lower arm position (704), and the arm angle (801) to obtain a palm score (palm_i), a lower arm score (bottom_i), and an arm angle score (angle_i).
(11) The electronic device of (8) or (10), wherein the circuitry is configured to perform an arm voting (1200) based on the palm position (703), the lower arm position (704), and the arm angle (801) to obtain an arm voting result (1201).
(12) The electronic device of (11), wherein the circuitry is configured to perform a score determination (1400) based on the arm voting result (1201), the tip score (tip_i), the palm score (palm_i), the lower arm score (bottom_i), and the arm angle score (angle_i) to obtain a score of the driver (score_D) and a score of the passenger (score_P).
(13) The electronic device of (12), wherein the circuitry is configured to obtain a hand-owner status (1710, 1711, 1712) indicating that the hand owner is a driver when the score of the driver (score_D) is higher than the score of the passenger (score_P).
(14) The electronic device of (12), wherein the circuitry is configured to obtain a hand-owner status (1710, 1711, 1712) indicating that the hand owner is a passenger when the score of the driver (score_D) is lower than the score of the passenger (score_P).
(15) The electronic device of (12), wherein the circuitry is configured to obtain a hand-owner status (1710, 1711, 1712) indicating that the hand owner is unknown when the absolute difference between the score of the driver (score_D) and the score of the passenger (score_P) is greater than a threshold (ε).
(16) The electronic device according to any one of (1) to (15), wherein the circuit is configured to perform seat occupancy detection based on the depth image to obtain the seat occupancy detection state when the captured image (700) is the depth image.
(17) The electronic device of any one of (1) to (16), wherein the circuitry is configured to perform the hand-owner identification (1706) based on a left-hand drive (LHD) configuration or a right-hand drive (RHD) configuration.
(18) A method includes performing a hand-owner identification (1706) based on an image analysis (701) of an image (700) captured by an imaging system (200) to obtain a hand-owner status (1710, 1711, 1712).
(19) A computer program comprising instructions which, when executed by a computer, cause the computer to perform (18) a method.
(20) A non-transitory computer readable recording medium storing a computer program product that, when executed by a processor, causes a computer to perform the method of (18).

Claims (19)

1. An electronic device includes circuitry configured to perform a hand-owner identification (1706) based on an image analysis (701) of an image (700) captured by an imaging system (200) to obtain a hand-owner status (1710, 1711, 1712).
2. The electronic device of claim 1, wherein the circuitry is configured to define a driver steering wheel zone (300) as a region of interest in the captured image (700) and to perform a handowner identification (1706) based on the defined driver steering wheel zone (300).
3. The electronic device of claim 1, wherein the circuitry is configured to detect an active hand (303) in the captured image (700), the captured image captured as a field of view (201) of the imaging system (200) of a ToF imaging system, and to perform a hand-owner identification (1706) based on the detected active hand (303).
4. The electronic device of claim 2, wherein the circuitry is configured to define a minimum number of frames (m) in which an active hand (303) should be detected in the driver steering wheel zone (300).
5. The electronic device of claim 4, wherein the circuitry is configured to count a number of frames (n), detect the active hand (303) in the driver steering wheel zone (300) in the frames of the number of frames (n), and perform a hand-owner identification (1706) by comparing the minimum number of frames (m) to the counted number of frames (n).
6. The electronic device of claim 5, wherein the circuitry is configured to obtain a hand-owner status (1710, 1711, 1712) indicating that the hand-owner is a driver when the minimum number of frames (m) is less than the counted number of frames (n).
7. The electronic device of claim 1, wherein the circuitry is configured to perform image analysis (701) based on the captured image (700) to obtain a tip position (702), a palm position (703), and an arm position (704) indicative of a lower arm position.
8. The electronic device of claim 7, wherein the circuitry is configured to perform an arm angle determination (800) based on the palm position (703) and the lower arm position (704) to obtain an arm angle (801).
9. The electronic device of claim 7, wherein the circuitry is configured to perform a fingertip analysis (1000) based on the tip position (702) to obtain a tip score (tip_i).
10. The electronic device of claim 9, wherein the circuitry is configured to perform an arm analysis (1100) based on the palm position (703), the lower arm position (704), and the arm angle (801) to obtain a palm score (palm_i), a lower arm score (bottom_i), and an arm angle score (angle_i).
11. The electronic device of claim 10, wherein the circuitry is configured to perform an arm voting (1200) based on the palm position (703), the lower arm position (704), and the arm angle (801) to obtain an arm voting result (1201).
12. The electronic device of claim 11, wherein the circuitry is configured to perform a score determination (1400) based on the arm voting result (1201), the tip score (tip_i), the palm score (palm_i), the lower arm score (bottom_i), and the arm angle score (angle_i) to obtain a score of the driver (score_D) and a score of the passenger (score_P).
13. The electronic device of claim 12, wherein the circuitry is configured to obtain a hand-owner status (1710, 1711, 1712) indicating that the hand owner is a driver when the score of the driver (score_D) is higher than the score of the passenger (score_P).
14. The electronic device of claim 12, wherein the circuitry is configured to obtain a hand-owner status (1710, 1711, 1712) indicating that the hand owner is a passenger when the score of the driver (score_D) is lower than the score of the passenger (score_P).
15. The electronic device of claim 12, wherein the circuitry is configured to obtain a hand-owner status (1710, 1711, 1712) indicating that the hand owner is unknown when the absolute difference between the score of the driver (score_D) and the score of the passenger (score_P) is greater than a threshold (ε).
16. The electronic device of claim 1, wherein the circuitry is configured to perform seat occupancy detection based on the depth image to obtain a seat occupancy detection state when the captured image (700) is a depth image.
17. The electronic device of claim 1, wherein the circuitry is configured to perform the hand-owner identification (1706) based on a left-hand drive (LHD) configuration or a right-hand drive (RHD) configuration.
18. A method includes performing a hand-owner identification (1706) based on an image analysis (701) of an image (700) captured by an imaging system (200) to obtain a hand-owner status (1710, 1711, 1712).
19. A computer program comprising instructions which, when executed by a computer, cause the computer to perform the method of claim 18.
