CN115035500A - Vehicle door control method and device, electronic equipment and storage medium - Google Patents

Vehicle door control method and device, electronic equipment and storage medium Download PDF

Info

Publication number
CN115035500A
CN115035500A (Application No. CN202210615384.2A)
Authority
CN
China
Prior art keywords
door
vehicle
depth image
vehicle door
passenger
Prior art date
Legal status
Pending
Application number
CN202210615384.2A
Other languages
Chinese (zh)
Inventor
张亚洲
范亦卿
陶莹
许亮
Current Assignee
Shanghai Sensetime Lingang Intelligent Technology Co Ltd
Original Assignee
Shanghai Sensetime Lingang Intelligent Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Sensetime Lingang Intelligent Technology Co Ltd
Priority claimed from CN202210615384.2A
Publication of CN115035500A
Legal status: Pending

Classifications

    • G06V 20/59: Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • E05F 15/73: Power-operated mechanisms for wings with automatic actuation responsive to movement or presence of persons or objects
    • G06T 7/50: Image analysis; depth or shape recovery
    • G06T 7/70: Image analysis; determining position or orientation of objects or cameras
    • G06V 10/82: Image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G06V 40/103: Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • G06V 40/20: Movements or behaviour, e.g. gesture recognition
    • E05Y 2900/132: Application of doors, windows, wings or fittings thereof; doors
    • G06T 2207/10028: Image acquisition modality: range image; depth image; 3D point clouds
    • G06T 2207/20084: Special algorithmic details: artificial neural networks [ANN]
    • G06T 2207/30196: Subject of image: human being; person
    • G06V 2201/07: Target detection


Abstract

The present disclosure relates to a vehicle door control method and apparatus, an electronic device, and a storage medium. The method includes: acquiring a depth image of a passenger in a door area captured by a depth image sensor inside the vehicle; determining the relative positional relationship between the passenger and a door frame of a vehicle door based on the depth image and the horizontal distance between the depth image sensor and the door frame of the vehicle door; and suppressing a door closing instruction received by the vehicle when the relative positional relationship indicates that the passenger is located within the door frame range of the vehicle door. Embodiments of the disclosure can prevent passengers from being caught when the vehicle door closes.

Description

Vehicle door control method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a vehicle door control method and apparatus, an electronic device, and a storage medium.
Background
At present, the opening and closing of the doors of public transport vehicles (such as buses) is usually controlled by the driver. Owing to the limits of the driver's field of perception and the subjectivity of human judgment, a door may be closed before a passenger has finished alighting, which threatens the personal safety of the alighting passenger.
Disclosure of Invention
The present disclosure provides a technical solution for vehicle door control.
According to an aspect of the present disclosure, there is provided a vehicle door control method including:
acquiring a depth image of a passenger in a door area acquired by a depth image sensor in the vehicle;
determining a relative positional relationship of the passenger and a door frame of a vehicle door based on the depth image and a horizontal distance of the depth image sensor from the door frame of the vehicle door;
and inhibiting a door closing command received by the vehicle when the relative positional relationship indicates that the passenger is located within a door frame range of the vehicle door.
In one possible implementation, the determining, based on the depth image and the horizontal distance between the depth image sensor and the door frame of the vehicle door, the relative positional relationship between the passenger and the door frame of the vehicle door includes:
determining, based on the depth image, a location of at least one human keypoint of the passenger and a first distance between the at least one human keypoint and the depth sensor;
determining a relative positional relationship of the at least one human body keypoint, a door frame range of a vehicle door, and the depth image sensor based on a position of the at least one human body keypoint, a first distance between the at least one human body keypoint and the depth sensor, and a horizontal distance of the depth image sensor from the door frame range of the vehicle door;
and determining the relative position relationship between the passenger and the door frame of the vehicle door according to the relative position relationship between the at least one human body key point, the door frame of the vehicle door and the depth image sensor.
In one possible implementation, the determining a relative positional relationship of the at least one human body keypoint, the door frame of the vehicle door, and the depth image sensor based on the position of the at least one human body keypoint, a first distance between the at least one human body keypoint and the depth sensor, and a horizontal distance of the depth image sensor from the door frame of the vehicle door includes:
determining an intersection point of a connecting line of the depth image sensor and the at least one key point and a vertical plane where a door frame of the vehicle door is located, and calculating a second distance between the depth sensor and the intersection point;
and determining the relative position relation of the at least one human body key point, the door frame of the vehicle door and the depth image sensor according to the comparison result of the first distance and the second distance.
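As an illustration of this step, the geometry can be sketched as follows. This is not code from the patent; the coordinate convention and all names are assumptions: the sensor is placed at the origin of a sensor-centred frame, and the vertical plane of the door frame is taken as the plane y = d, where d is the horizontal distance described above.

```python
import numpy as np

def second_distance(keypoint_xyz, frame_horizontal_dist):
    """Hypothetical sketch of the second-distance computation.

    The sensor sits at the origin; the door frame lies in the vertical
    plane y = frame_horizontal_dist (an assumed convention). Returns:
      first  - distance from the sensor to the key point,
      second - distance from the sensor to the point where the
               sensor-to-keypoint line crosses the door-frame plane.
    """
    v = np.asarray(keypoint_xyz, dtype=float)
    first = float(np.linalg.norm(v))      # first distance: sensor -> keypoint
    t = frame_horizontal_dist / v[1]      # scale factor to reach plane y = d
    second = t * first                    # second distance, by similar triangles
    return first, second
```

Comparing `first` with `second` then reveals on which side of the door-frame plane the key point lies, which is the comparison used in the conditions that follow.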
In one possible implementation, the determining the relative positional relationship of the passenger to the door frame of the vehicle door according to the relative positional relationship of the at least one human body key point, the door frame of the vehicle door, and the depth image sensor includes at least one of:
determining that the passenger has left the vehicle and is away from the door frame range of the vehicle door when the relative positional relationships between each of a plurality of preset human body key points, the door frame of the vehicle door, and the depth image sensor all satisfy a first condition;
determining that the passenger is located within the door frame range of the vehicle door when the relative positional relationship between any one of the plurality of preset human body key points, the door frame of the vehicle door, and the depth image sensor satisfies a second condition;
determining that the passenger is inside the vehicle and away from the door frame range of the vehicle door when the relative positional relationships between each of the plurality of preset human body key points, the door frame of the vehicle door, and the depth image sensor all satisfy a third condition;
wherein the first condition includes: the first distance is greater than the second distance, and the difference between the first distance and the second distance is greater than a first threshold;
the second condition includes: the absolute difference between the first distance and the second distance is less than the first threshold;
the third condition includes: the first distance is less than the second distance, and the absolute difference between the first distance and the second distance is greater than the first threshold.
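The three conditions and their aggregation over key points can be sketched as below. This is an illustrative reading with hypothetical names; `threshold` corresponds to the first threshold and would be tuned for the application.

```python
def locate_keypoint(first, second, threshold):
    """Classify one key point against the door-frame plane using the
    three conditions: first/second are the two sensor distances."""
    diff = first - second
    if diff > threshold:
        return "outside"        # first condition: beyond the frame plane
    if abs(diff) < threshold:
        return "in_frame"       # second condition: within the frame range
    return "inside"             # third condition: still inside the vehicle

def locate_passenger(keypoint_labels):
    """Aggregate per-keypoint labels: any single key point in the frame
    keeps the whole passenger classified as within the frame range."""
    if any(lbl == "in_frame" for lbl in keypoint_labels):
        return "in_frame"
    if all(lbl == "outside" for lbl in keypoint_labels):
        return "outside"
    if all(lbl == "inside" for lbl in keypoint_labels):
        return "inside"
    # Mixed inside/outside without any in-frame label: treat
    # conservatively as in the frame range (an assumption).
    return "in_frame"
```

Note the conservative aggregation: a single hand or foot key point near the frame plane is enough to block the door, matching the "any one key point" wording of the second condition.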
In one possible implementation, the preset human body key points include: vertex (top-of-head) key points, shoulder key points, elbow key points, hand key points, leg key points, and foot key points.
In one possible implementation, the method further includes:
detecting a passenger in the door area according to the image information of the door area captured by the depth image sensor;
the acquiring a depth image of a passenger in the door area captured by the depth image sensor inside the vehicle includes:
acquiring the depth image of the passenger in the door area captured by the depth image sensor inside the vehicle when a passenger in the door area is detected and a door closing instruction is received.
In one possible implementation, the method further includes:
detecting whether a passenger with an intention to alight is present in the door area according to the image information of the door area captured by the depth image sensor;
the acquiring a depth image of a passenger in the door area captured by the depth image sensor inside the vehicle includes:
acquiring the depth image of the passenger in the door area captured by the depth image sensor inside the vehicle when a passenger with an intention to alight is detected in the door area and a door closing instruction is received.
In one possible implementation, the method further includes:
controlling the vehicle door to close when the relative positional relationship indicates that the passenger has left the vehicle and is away from the vehicle door, and a door closing instruction is received.
In one possible implementation, the depth sensor is disposed on the vehicle ceiling at a position facing the door.
According to an aspect of the present disclosure, there is provided a vehicle door control device including:
the acquisition module is used for acquiring a depth image of a passenger in a vehicle door area, which is acquired by a depth image sensor in the vehicle;
a relationship determination module for determining a relative positional relationship of the passenger and a door frame of a vehicle door based on the depth image and a horizontal distance of the depth image sensor and the door frame of the vehicle door;
and the suppression module is used for suppressing a vehicle door closing instruction received by the vehicle under the condition that the relative position relation indicates that the passenger is positioned in the door frame range of the vehicle door.
In one possible implementation, the relationship determining module is configured to determine, based on the depth image, a location of at least one human keypoint of the passenger and a first distance between the at least one human keypoint and the depth sensor; determining a relative positional relationship of the at least one human body keypoint, a door frame of a vehicle door, and the depth image sensor based on a position of the at least one human body keypoint, a first distance between the at least one human body keypoint and the depth sensor, and a horizontal distance of the depth image sensor from the door frame of the vehicle door; and determining the relative position relationship between the passenger and the door frame of the vehicle door according to the relative position relationship between the at least one human body key point, the door frame of the vehicle door and the depth image sensor.
In a possible implementation manner, the relationship determining module is configured to determine an intersection point of a connection line between the depth image sensor and the at least one key point and a vertical plane where a door frame of the vehicle door is located, and calculate a second distance between the depth sensor and the intersection point; and determining the relative position relation of the at least one human body key point, the door frame of the vehicle door and the depth image sensor according to the comparison result of the first distance and the second distance.
In one possible implementation, the relationship determining module is configured to perform at least one of:
determining that the passenger has left the vehicle and is away from the door frame range of the vehicle door when the relative positional relationships between each of a plurality of preset human body key points, the door frame of the vehicle door, and the depth image sensor all satisfy a first condition;
determining that the passenger is located within the door frame range of the vehicle door when the relative positional relationship between any one of the plurality of preset human body key points, the door frame of the vehicle door, and the depth image sensor satisfies a second condition;
determining that the passenger is inside the vehicle and away from the door frame range of the vehicle door when the relative positional relationships between each of the plurality of preset human body key points, the door frame of the vehicle door, and the depth image sensor all satisfy a third condition;
wherein the first condition includes: the first distance is greater than the second distance, and the difference between the first distance and the second distance is greater than a first threshold;
the second condition includes: the absolute difference between the first distance and the second distance is less than the first threshold;
the third condition includes: the first distance is less than the second distance, and the absolute difference between the first distance and the second distance is greater than the first threshold.
In one possible implementation, the preset human body key points include: vertex (top-of-head) key points, shoulder key points, elbow key points, hand key points, leg key points, and foot key points.
In one possible implementation, the apparatus further includes:
the passenger detection module is configured to detect a passenger in the door area according to the image information of the door area captured by the depth image sensor;
the acquisition module is configured to acquire the depth image of the passenger in the door area captured by the depth image sensor inside the vehicle when a passenger in the door area is detected and a door closing instruction is received.
In one possible implementation, the apparatus further includes:
the alighting intention detection module is configured to detect whether a passenger with an intention to alight is present in the door area according to the image information of the door area captured by the depth image sensor;
the acquisition module is configured to acquire the depth image of the passenger in the door area captured by the depth image sensor inside the vehicle when a passenger with an intention to alight is detected in the door area and a door closing instruction is received.
In one possible implementation, the apparatus further includes:
the control module is configured to control the vehicle door to close when the relative positional relationship indicates that the passenger has left the vehicle and is away from the vehicle door, and a door closing instruction is received.
In one possible implementation, the depth sensor is disposed on the vehicle ceiling at a position facing the door.
According to an aspect of the present disclosure, there is provided an electronic device including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the memory-stored instructions to perform the above-described method.
According to an aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described method.
In the embodiments of the present disclosure, a depth image of a passenger in the door area captured by a depth image sensor inside the vehicle is acquired; the relative positional relationship between the passenger and the door frame of the vehicle door is determined based on the depth image and the horizontal distance between the depth image sensor and the door frame; and when the relative positional relationship indicates that the passenger is located within the door frame range of the vehicle door, a door closing instruction received by the vehicle is suppressed. In this way, the passenger's position can be determined accurately from the depth image of the passenger detected in the door area, and the relative positional relationship between the passenger and the door frame can be determined from the depth image and the horizontal distance between the sensor and the door frame, without having to detect vehicle body information such as the door or the door frame itself, which improves the efficiency of the door control process. Moreover, suppressing the door closing instruction received by the vehicle when the relative positional relationship indicates that the passenger is within the door frame range prevents the passenger from being caught.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure. Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 shows a flowchart of a vehicle door control method according to an embodiment of the present disclosure.
Fig. 2 is a schematic diagram illustrating a positional relationship between an occupant and a door frame according to an embodiment of the present disclosure.
Fig. 3 shows a block diagram of a vehicle door control device provided in an embodiment of the present disclosure.
Fig. 4 illustrates a block diagram of an electronic device 800 provided by an embodiment of the disclosure.
Fig. 5 shows a block diagram of an electronic device 1900 provided by an embodiment of the disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" herein merely describes an association between associated objects and indicates that three relationships are possible; for example, "A and/or B" may mean: A alone, both A and B, or B alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality; for example, "including at least one of A, B, and C" may mean including any one or more elements selected from the set consisting of A, B, and C.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
At present, when passengers board or alight from a public transport vehicle at a stop, the driver usually judges whether anyone is within the door range by checking the rear-view mirror beside the door or the images captured by a camera mounted at the door, and then manually opens or closes the door. Because this way of opening or closing the door depends on what the driver happens to see, it is highly subjective, and passengers risk being caught by the closing door.
In addition, some approaches install a detector in the door frame of the vehicle door; because such a detector has blind areas, a small body part such as a human hand may not be accurately detected, and the low sensitivity often leads to accidents.
In the embodiments of the present disclosure, a depth image of a passenger in the door area captured by a depth image sensor inside the vehicle is acquired; the relative positional relationship between the passenger and the door frame of the vehicle door is determined based on the depth image and the horizontal distance between the depth image sensor and the door frame; and when the relative positional relationship indicates that the passenger is located within the door frame range of the vehicle door, a door closing instruction received by the vehicle is suppressed. The passenger's position can thus be determined accurately from the depth image, and the relative positional relationship between the passenger and the door frame can be determined from the depth image and the horizontal distance between the sensor and the door frame, without detecting vehicle body information such as the door or the door frame itself, improving the efficiency of the door control process. Suppressing the door closing instruction when the passenger is within the door frame range achieves the anti-pinch effect.
In one possible implementation, the execution subject of the method may be an intelligent driving control device installed on a vehicle. In one possible implementation, the method may be performed by a terminal device, a server, or another processing device. The terminal device may be a vehicle-mounted device, a User Equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, or a wearable device. The vehicle-mounted device may be a head unit or a domain controller in the vehicle cabin, or a device host used for executing the vehicle door control method in an ADAS (Advanced Driving Assistance System), an OMS (Occupant Monitoring System), or a DMS (Driver Monitoring System). In some possible implementations, the door control method may be implemented by a processor invoking computer-readable instructions stored in a memory.
For convenience of description, in one or more implementations of the present description, an execution subject of the door control method may be an in-vehicle device in a vehicle, and an embodiment of the method will be described hereinafter by taking the execution subject as the in-vehicle device as an example. It is understood that the method is carried out by the vehicle-mounted device only for illustrative purposes, and is not to be construed as limiting the method.
Fig. 1 shows a flowchart of a vehicle door control method according to an embodiment of the present disclosure, which includes, as shown in fig. 1:
in step S11, a depth image of a door region passenger captured by a depth image sensor inside the vehicle is acquired.
The door region here is a region near a door of the vehicle, for example, the region near the door inside the cabin and the region near the door outside the cabin. The door may be a boarding door or an alighting door. The transport means may be a road vehicle, an aircraft, a vessel, and so on; the vehicle may be a public transport vehicle such as a bus, a subway train, or a train, or may be at least one of a private car, a shared car, a ride-hailing car, a taxi, a van, etc. The present disclosure does not limit the specific type of vehicle.
The depth image sensor can collect depth information of an object. In one example, the depth image sensor may be a color image sensor or an infrared image sensor equipped with a Time of Flight (TOF) sensor or a structured light sensor, or it may be a binocular image sensor. A TOF sensor measures distance from the round-trip flight time of a signal between two asynchronous transceivers (or between a transceiver and a reflecting surface); a binocular sensor perceives the depth of an object through two cameras; a structured light sensor may be a coded structured light sensor or a speckle structured light sensor.
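For intuition only, the basic TOF ranging relation (standard physics, not specific to this patent): the signal travels to the surface and back, so the distance is half the round trip at the propagation speed.

```python
def tof_distance(round_trip_seconds):
    """Convert a measured round-trip flight time into a distance.
    The signal covers the sensor-to-surface distance twice, hence
    the division by two."""
    SPEED_OF_LIGHT = 299_792_458.0  # metres per second
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0
```

For example, a 20 ns round trip corresponds to roughly 3 m, which is the scale of distances inside a bus cabin.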
The depth image sensor can acquire an image containing depth information in the vehicle door area, and high-precision depth information can be obtained.
In step S12, the relative positional relationship between the passenger and the door frame of the vehicle door is determined based on the depth image and the horizontal distance between the depth image sensor and the door frame of the vehicle door.
The depth image includes depth information of the passenger, and the depth information of the passenger can represent position information of the passenger in a coordinate system, which may be a coordinate system established by using the sensor as a coordinate origin or may be a world coordinate system, which is not limited by the present disclosure.
Optionally, the depth information of each pixel in the depth image represents the distance between the object represented by that pixel and the depth image sensor. The distance between the passenger and the depth image sensor can therefore be determined from the depth image.
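As a minimal sketch of reading such a pixel-wise distance (the function name and the millimetre depth scale are assumptions; real depth sensors document their own units):

```python
import numpy as np

def keypoint_distance(depth_map, u, v, depth_scale=0.001):
    """Read the sensor-to-object distance at pixel column u, row v of a
    raw depth map. depth_scale converts raw units to metres (0.001 for
    a millimetre-encoded map; check the sensor's documentation)."""
    return float(depth_map[v, u]) * depth_scale
```

Looking up the depth value at a detected human key point's pixel coordinates in this way yields the first distance used in the comparisons above.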
The relative positional relationship between a passenger and the door frame of a vehicle door indicates whether the passenger is within the door frame range. The door frame range may be the spatial region within the door frame; since the four edges of the door frame lie in the same plane, the planar region enclosed by them can be regarded as the door frame range. If the passenger is within the door frame range, there is a risk of pinching when the door is closed.
The horizontal distance between the depth image sensor and the door frame of the vehicle door is the distance from the sensor to the plane in which the door frame lies, i.e. the length of the perpendicular dropped from the sensor onto that plane. Because the door frame is usually perpendicular to the ground and its position and size in the vehicle are fixed, once this horizontal distance is determined, the position of the entire door frame is determined.
The horizontal distance between the depth image sensor and the door frame of the vehicle door may be measured in advance, for example obtained directly by measuring the distance between the sensor and the plane in which the door frame lies. Alternatively, when the depth image sensor is mounted, a mounting position parameter of the depth image sensor may be acquired, the mounting position parameter including the horizontal distance between a positioning point of the depth image sensor and the door frame of the vehicle door. Other ways of obtaining this horizontal distance are not described further here.
In this way, the position information of the passenger can be determined based on the depth image, and the position of the door frame of the vehicle door can be determined based on the horizontal distance between the depth image sensor and the door frame, so that the relative positional relationship between the passenger in the door area and the door frame of the vehicle door can be determined.
In step S13, in a case where the relative positional relationship indicates that the passenger is located within the door frame range of the door, a door closing instruction received by the vehicle is suppressed.
The door closing command may be issued by the driver operating a control element, or, in an unmanned or autonomous driving scenario, the vehicle may automatically issue the door closing command upon arriving at the destination and detecting that a passenger is alighting. At this time, if it is determined from the relative positional relationship obtained in step S12 that the passenger is located within the door frame range of the door, the door closing command triggered manually by the driver or automatically by the vehicle controller can be suppressed.
The vehicle door control method provided by the embodiment of the present disclosure can be executed in a stopping scene. For example, after the driver issues a door closing instruction, when the relative positional relationship indicates that a passenger is located within the door frame range, this often means the passenger would be caught if the door were closed; the door closing instruction received by the vehicle is therefore suppressed to avoid catching the passenger, providing high safety.
In the embodiment of the present disclosure, a depth image of a passenger in the door area acquired by a depth image sensor inside the vehicle is obtained; the relative positional relationship between the passenger and the vehicle door is determined based on the depth image and the horizontal distance between the depth image sensor and the vehicle door; and in the case that the relative positional relationship indicates that the passenger is located within the door frame range of the vehicle door, a door closing instruction received by the vehicle is suppressed. In this way, the position of the passenger can be determined accurately based on the depth image of the passenger in the door area, with high accuracy; the relative positional relationship between the passenger and the door can be determined from the depth image and the horizontal distance between the depth image sensor and the door, without needing to detect vehicle body information such as the door or door frame, improving the efficiency of the door control process. And when the relative positional relationship indicates that the passenger is located within the door frame range of the vehicle door, the door closing instruction received by the vehicle is suppressed, achieving the effect of preventing the passenger from being caught.
In one possible implementation, the determining the relative positional relationship between the passenger and the vehicle door based on the depth image and the horizontal distance between the depth image sensor and the vehicle door includes: determining, based on the depth image, a location of at least one human keypoint of the passenger and a first distance between the at least one human keypoint and the depth sensor; determining a relative positional relationship of the at least one human body keypoint, a door frame of a vehicle door, and the depth image sensor based on a position of the at least one human body keypoint, a first distance between the at least one human body keypoint and the depth sensor, and a horizontal distance of the depth image sensor from the door frame of the vehicle door; and determining the relative position relationship of the passenger and the door frame of the vehicle door according to the relative position relationship of the at least one human key point, the door frame of the vehicle door and the depth image sensor.
The depth image of the passenger in the vehicle door area acquired by the depth image sensor comprises depth information of objects in the vehicle door area, the depth information can represent the distance between the objects and the depth sensor, and then the position of at least one human key point of the passenger and the first distance between the at least one human key point and the depth sensor can be determined by detecting the human key point based on the depth image.
In one possible implementation, determining a relative positional relationship of the at least one human body keypoint, the door frame of the vehicle door, and the depth image sensor based on the position of the at least one human body keypoint, a first distance between the at least one human body keypoint and the depth sensor, and a horizontal distance between the depth image sensor and the door frame of the vehicle door includes: determining an intersection point of a connecting line of the depth image sensor and the at least one key point and a vertical plane where a door frame of the vehicle door is located, and calculating a second distance between the depth sensor and the intersection point; and determining the relative position relationship among the at least one human body key point, the door frame of the vehicle door and the depth image sensor according to the comparison result of the first distance and the second distance.
It should be noted that the relative position relationship among the human body key point, the door frame of the vehicle door, and the depth image sensor may include the position relationship between any one of the three relative to the other two, and in an exemplary case, in a horizontal direction (a direction parallel to the ground), the human body key point is located on a side of the door frame of the vehicle door away from the depth sensor, in other words, the door frame of the vehicle door is located between the human body key point and the depth sensor in the horizontal direction.
Refer to fig. 2, which is a schematic diagram of the positional relationship between a passenger and a door frame according to an embodiment of the present disclosure. The line connecting the depth image sensor A and a key point B of the passenger is AB; C is the intersection of AB with the vertical plane in which the door frame of the vehicle door lies; and the second distance between the depth image sensor A and the intersection point is the length of segment AC.
The length b of AC may be determined based on the included angle α and the distance a, specifically by formula (1).
b = a / cos α        (1)
Here, the included angle α is the angle between AB and the horizontal direction. In one possible implementation, the included angle is determined as follows: acquiring image information of the vehicle door area collected by the depth image sensor; performing human body key point detection on the image information and determining the coordinates of the detected key points in the image information; and determining, based on a pre-calibrated correspondence between coordinates in the image and included angles, the included angle corresponding to the detected key point coordinates, which is taken as the included angle between the target line segment AB and the horizontal direction.
The depth image sensor may be a sensor having both a depth information acquisition function and an RGB image acquisition function, for example, and may acquire image information of the vehicle door region as well as depth information.
In one possible implementation, the depth sensor is disposed on the vehicle roof at a position facing the door, as shown at point a in fig. 2.
In one example, the sensor may be located at the top of the vehicle cabin near the door, facing the door side. The image information may be an RGB image or an infrared image. The image information may include a single frame image, a video stream, or multiple frames of a video stream; the present disclosure does not limit the specific type of the image information.
After the image information of the vehicle door area is obtained, human body key point detection can be carried out on the image information, and the presence of passengers in the vehicle door area can be determined under the condition that the human body key point is detected. As an example of the implementation, a plurality of human body key points to be detected may be preset, for example, 17 key points may be set in a human body skeleton, which respectively indicate various parts of a human body, such as a head, a hand, and an elbow, and whether a passenger exists in a vehicle door region is determined by detecting the 17 key points.
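The presence check described above can be sketched in a few lines of Python. This is a minimal illustration, not the disclosed implementation: the detector output format (a dict mapping a keypoint name to a `(x, y, confidence)` tuple), the keypoint names, and the thresholds `min_conf` and `min_points` are all assumptions made for the example.

```python
# Hypothetical detector output: keypoint name -> (x, y, confidence).
# The preset keypoint list below is illustrative, not the patent's exact set.
PRESET_KEYPOINTS = ["head", "l_shoulder", "r_shoulder", "l_elbow", "r_elbow",
                    "l_hand", "r_hand", "l_knee", "r_knee", "l_foot", "r_foot"]

def passenger_present(detections, min_conf=0.5, min_points=3):
    """Declare a passenger present in the door area when enough preset
    keypoints are detected with sufficient confidence."""
    hits = sum(1 for name in PRESET_KEYPOINTS
               if name in detections and detections[name][2] >= min_conf)
    return hits >= min_points
```

A detection with three confident keypoints (for example head, one hand, one foot) would be counted as a passenger, while a single low-confidence detection would not.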
As another example of this implementation, the image information may be input into a backbone network, feature extraction may be performed on the image information via the backbone network to obtain a feature map, and then the positions of key points of a human body may be detected based on the feature map. The backbone network may adopt network structures such as ResNet and MobileNet, which is not limited herein.
The sensor may be fixed inside the vehicle cabin, and since the position of the sensor is fixed, different positions in the image correspond to different angles. Then, after the human key point is determined, the included angle between the connecting line between the human key point and the sensor and the horizontal direction may be determined based on the coordinates of the human key point in the image information (e.g., two-dimensional image). Specifically, the corresponding relationship between the coordinates in the image and the included angle, that is, the included angle corresponding to the coordinates in the image, may be calibrated in advance, and then the included angle corresponding to the detected coordinates of the key point is determined based on the corresponding relationship between the coordinates in the image calibrated in advance and the included angle, and is used as the included angle between the target line segment and the horizontal direction, where the target line segment is a connection line connecting the depth image sensor and the key point of the human body.
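The pre-calibrated coordinate-to-angle correspondence can be held as a lookup table with interpolation between calibrated points. The sketch below is a hedged illustration: the table values are invented placeholders, and a real calibration would be measured once at installation time for the actual sensor pose.

```python
import bisect

# Hypothetical calibration: pixel row (y coordinate) -> included angle (degrees).
# These numbers are placeholders; real values come from on-vehicle calibration.
CALIB_ROWS = [0, 120, 240, 360, 480]
CALIB_ANGLES = [80.0, 60.0, 40.0, 20.0, 5.0]

def angle_for_pixel(y: int) -> float:
    """Return the included angle for an image row by linear interpolation
    in the pre-calibrated table, clamping outside the calibrated range."""
    if y <= CALIB_ROWS[0]:
        return CALIB_ANGLES[0]
    if y >= CALIB_ROWS[-1]:
        return CALIB_ANGLES[-1]
    i = bisect.bisect_right(CALIB_ROWS, y)
    y0, y1 = CALIB_ROWS[i - 1], CALIB_ROWS[i]
    a0, a1 = CALIB_ANGLES[i - 1], CALIB_ANGLES[i]
    return a0 + (a1 - a0) * (y - y0) / (y1 - y0)
```

Because the sensor is fixed, this table only needs to be produced once; at runtime the detected keypoint coordinate indexes straight into it.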
In addition, based on the coordinates of the key points in the image information, the depth information of the key points of the passengers in the depth information collected by the sensor can be determined. The image information and the depth information acquired by the sensor have a one-to-one correspondence relationship on the coordinates, and the correspondence relationship can also be calibrated in advance. Then, after the coordinates of the key points in the image information collected by the sensor are determined, the depth information corresponding to the coordinates can be determined according to the coordinates, namely the depth information of the key points of the human body.
Therefore, the included angle corresponding to the coordinate of the key point detected in the image information can be determined through the image information of the vehicle door area acquired by the sensor and the corresponding relation between the coordinate in the image calibrated in advance and the included angle, the included angle is used as the included angle between the target line segment and the horizontal direction, and the accuracy of the obtained included angle is high.
After the angle α is determined, since the horizontal distance a between the depth image sensor and the door frame of the vehicle door is also known, the length b of AC, i.e. the second distance between the depth sensor and the intersection point, can be determined based on formula (1).
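Formula (1) reduces to a one-line computation. The sketch below assumes the angle is supplied in degrees; the function name and units are choices made for the example, not part of the disclosure.

```python
import math

def second_distance(a: float, alpha_deg: float) -> float:
    """Length b of segment AC per formula (1): b = a / cos(alpha).
    a -- horizontal distance from the sensor to the door-frame plane
    alpha_deg -- angle between line AB and the horizontal, in degrees"""
    return a / math.cos(math.radians(alpha_deg))
```

For example, with a = 1.0 and α = 60°, b = 1.0 / cos 60° = 2.0.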
Then, according to the comparison result of the first distance and the second distance, the relative position relation of at least one human body key point, the door frame of the vehicle door and the depth image sensor can be determined.
In the embodiment of the present disclosure, the intersection point of the line connecting the depth image sensor and the at least one key point with the vertical plane in which the door frame of the vehicle door lies is determined, and the second distance between the depth sensor and the intersection point is calculated; the relative positional relationship of the at least one human body key point, the door frame of the vehicle door, and the depth image sensor is then determined from the comparison result of the first distance and the second distance. In this way, the second distance between the depth sensor and the intersection point can be calculated accurately, the relative positional relationship of the at least one human body key point, the door frame of the vehicle door, and the depth image sensor can be determined accurately, and anti-pinch protection for the vehicle door can be realized accurately.
In one possible implementation, determining the relative positional relationship of the passenger and the door frame of the vehicle door according to the relative positional relationship of the at least one human body key point, the door frame of the vehicle door, and the depth image sensor includes at least one of: determining that the passenger has left the vehicle and is away from the door frame range of the vehicle door when the relative positional relationships between each of a plurality of preset human body key points, the door frame of the vehicle door, and the depth image sensor all satisfy a first condition; determining that the passenger is located within the door frame range of the vehicle door when the relative positional relationships between any one of the plurality of preset human body key points, the door frame of the vehicle door, and the depth image sensor satisfy a second condition; and determining that the passenger is inside the vehicle and away from the door frame range of the vehicle door when the relative positional relationships between each of the plurality of preset human body key points, the door frame of the vehicle door, and the depth image sensor all satisfy a third condition.
Wherein the first condition comprises: the first distance is greater than the second distance and a difference between the first distance and the second distance is greater than a first threshold; the second condition includes: an absolute difference between the first distance and the second distance is less than a first threshold; the third condition includes: the first distance is less than the second distance and an absolute difference between the first distance and the second distance is greater than a first threshold.
The relative positional relationship of the passenger to the door frame of the vehicle door may include at least one of: the passenger has left the vehicle and is away from the door frame range of the door; the passenger is located within the door frame range of the door; the passenger is inside the vehicle and away from the door frame range of the door.
Specifically, when the relative positional relationships between each of the plurality of preset human body key points, the door frame of the vehicle door, and the depth image sensor all satisfy the first condition, it is determined that the passenger has left the vehicle and is away from the door frame range of the vehicle door. The first condition includes: the first distance is greater than the second distance and the difference between the first distance and the second distance is greater than a first threshold. The first threshold may be a preset empirical threshold.
When the relative positional relationships between any one of the preset human body key points, the door frame of the vehicle door, and the depth image sensor satisfy the second condition, it is determined that the passenger is located within the door frame range of the vehicle door. The second condition includes: the absolute difference between the first distance and the second distance is less than the first threshold.
When the relative positional relationships between each of the preset human body key points, the door frame of the vehicle door, and the depth image sensor all satisfy the third condition, it is determined that the passenger is inside the vehicle and away from the door frame range of the vehicle door. The third condition includes: the first distance is less than the second distance and the absolute difference between the first distance and the second distance is greater than the first threshold.
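The three conditions above amount to a per-keypoint classification followed by an aggregation over the passenger's keypoints. The sketch below follows that logic; the threshold value, the zone labels, and the conservative handling of a mixed inside/outside result are assumptions made for the example.

```python
THRESHOLD = 0.10  # the first threshold (metres); a preset empirical value

def keypoint_zone(first: float, second: float, thr: float = THRESHOLD) -> str:
    """Classify one keypoint against the door-frame plane.
    first  -- first distance: sensor -> keypoint (from the depth image)
    second -- second distance: sensor -> intersection with the frame plane"""
    if first > second and first - second > thr:
        return "outside"      # first condition: keypoint has left the vehicle
    if abs(first - second) < thr:
        return "in_frame"     # second condition: within door-frame range
    return "inside"           # third condition: still inside the vehicle

def passenger_state(distance_pairs):
    """Aggregate over all preset keypoints of one passenger, given a list of
    (first_distance, second_distance) pairs. Any single keypoint in the frame
    means the passenger is within the door-frame range."""
    zones = [keypoint_zone(f, s) for f, s in distance_pairs]
    if any(z == "in_frame" for z in zones):
        return "in_frame"
    if all(z == "outside" for z in zones):
        return "outside"
    if all(z == "inside" for z in zones):
        return "inside"
    # Mixed inside/outside keypoints span the frame plane; treating this as
    # at-risk is a conservative choice made for this sketch.
    return "in_frame"
```

Only the "outside" result would allow a door closing instruction to proceed.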
In the embodiment of the disclosure, the relative position relationship between the passenger and the door frame of the vehicle door can be accurately determined according to the relative position relationship between at least one human body key point, the door frame of the vehicle door and the depth image sensor, so that the vehicle door can be accurately prevented from being clamped based on the relative position relationship.
In one possible implementation, the preset human body key points include: vertex key points, shoulder key points, elbow key points, hand key points, leg key points, foot key points.
In addition, the human body key points may be other key points of the human body skeleton, respectively indicating various parts of the human body, such as the head, the hands, the elbows, the knees and the like.
In the embodiment of the disclosure, the detection of the passenger is realized by monitoring the human body key points, so that the relative position relationship between the passenger and the vehicle door is accurately determined, and the vehicle door is prevented from being clamped accurately based on the relative position relationship.
In one possible implementation, the method further includes: detecting passengers in the vehicle door area according to the image information of the vehicle door area acquired by the depth image sensor; the depth image of the passenger in the door area acquired by the depth image sensor in the vehicle interior is acquired, and the method comprises the following steps: and under the condition that the passengers in the door area are detected and the door closing instruction is received, acquiring the depth image of the passengers in the door area, which is acquired by a depth image sensor in the vehicle.
Whether passengers exist in the vehicle door area can be detected through an image target detection technology, specifically, human body key point detection can be carried out on image information, and the passengers in the vehicle door area can be determined under the condition that the human body key points are detected. As an example of the implementation, a plurality of human body key points to be detected may be preset, for example, 17 key points may be set in a human body skeleton, which respectively indicate various parts of a human body, such as a head, a hand, and an elbow, and whether a passenger exists in a vehicle door region is determined by detecting the 17 key points.
As another example of this implementation, the image information may be input into a backbone network, feature extraction may be performed on the image information via the backbone network to obtain a feature map, and then the positions of key points of a human body may be detected based on the feature map, thereby detecting passengers in a door region. The backbone network may adopt network structures such as ResNet, MobileNet, and the like, which is not limited herein.
Embodiments of the present disclosure may be executed when a manually triggered door closing instruction is received. Thus, after a passenger is detected in the vehicle door area, and in the case that a door closing instruction is received, the depth image of the passenger in the door area acquired by the depth image sensor inside the vehicle can be obtained, and the vehicle door control method provided by the present disclosure can then be executed, avoiding the passenger being caught due to inaccurate observation by the driver.
In one possible implementation, the method further includes: detecting whether passengers with getting-off intention exist in the vehicle door area according to the image information of the vehicle door area acquired by the depth image sensor; the method for acquiring the depth image of the passenger in the vehicle door area acquired by the depth image sensor in the vehicle interior comprises the following steps: when the situation that passengers with getting-off intention exist in the vehicle door area and a vehicle door closing instruction is received is detected, the depth image of the passengers in the vehicle door area, which is acquired by a depth image sensor in the vehicle, is acquired.
The action of the passenger approaching the vehicle door can be identified based on the video stream of the vehicle door area collected by the depth image sensor, whether the passenger has the getting-off intention or not can be identified, and the passenger with the getting-off intention can be determined under the condition that the action of the passenger approaching the vehicle door is identified.
Or in a bus scene of card swiping for getting off, the card swiping action of the passenger can be identified through the video stream to identify whether the passenger has the getting off intention, and the passenger with the getting off intention can be determined under the condition that the card swiping action of the passenger is identified.
The specific implementation manner for identifying the action of the passenger approaching the vehicle door and the action of swiping the card by the passenger can be implemented based on network structures such as ResNet and MobileNet, which is not described in detail in this disclosure.
Embodiments of the present disclosure may be executed when a door closing instruction is received. Thus, when a passenger with an intention to get off is detected in the door area, and in the case that a door closing instruction is received, the depth image of the passenger in the door area acquired by the depth image sensor inside the vehicle is obtained, and the vehicle door control method provided by the present disclosure is then executed, avoiding the passenger being caught due to inaccurate observation by the driver.
In one possible implementation, the method further includes: and controlling the vehicle door to be closed when the relative position relation indicates that the passenger leaves the vehicle and is far away from the vehicle door and receives a vehicle door closing instruction.
When the relative positional relationship indicates that the passenger leaves the vehicle and is far away from the door and receives the door closing instruction, it indicates that there is no risk of the passenger being caught by the door, and the instruction to close the door can be executed.
The door closing command may be issued by the driver. In the unmanned driving scenario, the door closing command may be issued by a vehicle controller. The car door closing instruction sent by the driver can be a car door closing instruction triggered by the operation of the driver and the like. In one or more embodiments provided by the present disclosure, it may be that in a case where a door closing instruction issued by a driver is received, whether a passenger in a door area is located within a door frame is detected, and an instruction whether to suppress the door closing is determined. That is, the manually triggered door closing instruction is not directly issued to the door control unit controlling the closing of the door, but is first sent to the execution main body of the method, so as to further determine whether to send the door closing instruction to the door control unit controlling the closing of the door.
Therefore, even if a door closing instruction has been triggered, either manually by the driver or automatically by the vehicle controller after arriving at a stop or destination, before the instruction is issued to the door control unit that controls the closing of the door, the method still detects whether a passenger is located within the door frame range, and the door closing instruction received by the vehicle is suppressed when the relative positional relationship indicates that a passenger is within the door frame range of the vehicle door, improving passenger safety. A voice prompt may also be given at the same time to remind passengers that the door is closing and to mind their safety.
An application scenario of an embodiment of the present disclosure is explained below. Referring to fig. 2, the application scenario is a bus alighting scenario. In this scenario, a TOF camera located at the alighting door of the bus collects image information and depth information of the alighting door area, and human body key points in the door area are detected from the image information. When a key point is detected, the included angle α of the line connecting the camera and the key point relative to the horizontal direction is obtained via the depth information; the distance b is then calculated by formula (1) from the angle α and the horizontal distance a between the camera and the door frame in the horizontal direction. The straight-line distance AB from the key point to the camera, acquired by the TOF camera, is compared with b; if AB is greater than b, the key point has left the door frame range. Whether all key points of the same passenger have left the door frame is judged by this logic; if any key point has not left the door frame range, the driver's door closing command received by the door control unit is temporarily suppressed until all key points have left the door frame range, after which the door closing action is performed.
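The gating described for this bus scenario can be sketched end to end. This is an illustrative Python sketch under stated assumptions, not the disclosed implementation: keypoints are taken as `(angle_degrees, first_distance)` pairs, and the clearance margin `thr` and callback interface are invented for the example.

```python
import math

def all_keypoints_clear(keypoints, a, thr=0.10):
    """keypoints: list of (alpha_deg, first_distance) per detected body keypoint.
    a: horizontal camera-to-frame distance. True only when every keypoint lies
    beyond the door-frame plane (AB > b per formula (1)) by more than thr."""
    for alpha_deg, first in keypoints:
        b = a / math.cos(math.radians(alpha_deg))  # second distance
        if not (first > b and first - b > thr):
            return False
    return True

def on_driver_close_request(keypoints, a, close_door, thr=0.10):
    """Gate the driver's close command: forward it to the door control unit
    (the close_door callback) only when all keypoints have left the frame;
    otherwise temporarily suppress it and report False."""
    if all_keypoints_clear(keypoints, a, thr):
        close_door()
        return True
    return False
```

With a = 1.0 m, a keypoint at α = 60° has b = 2.0 m; a measured first distance of 2.5 m clears the frame, while 2.05 m is within the margin and keeps the door open.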
In this application scenario, after the bus driver observes that passengers have finished getting off and operates the control to close the door, the camera starts sensing to judge whether everyone has left the door range; if a human body key point is detected that has not left the door range, the door is not closed for the moment, and the door is closed only after all key points are identified as having left the door range.
It is understood that the above-mentioned method embodiments of the present disclosure can be combined with each other to form a combined embodiment without departing from the logic of the principle, which is limited by the space, and the detailed description of the present disclosure is omitted. Those skilled in the art will appreciate that in the above methods of the specific embodiments, the specific order of execution of the steps should be determined by their function and possibly their inherent logic.
In addition, the present disclosure also provides a vehicle door control device, an electronic device, a computer-readable storage medium, and a program, which can be used to implement any one of the vehicle door control methods provided by the present disclosure, and the corresponding technical solutions and descriptions and corresponding descriptions in the method sections are not repeated.
Fig. 3 shows a block diagram of a vehicle door control apparatus according to an embodiment of the present disclosure, which includes, as shown in fig. 3:
the acquiring module 31 is used for acquiring a depth image of a passenger in a door area acquired by a depth image sensor in the vehicle;
a relationship determination module 32 for determining a relative positional relationship of the passenger and a door frame of the vehicle door based on the depth image and a horizontal distance between the depth image sensor and the door frame of the vehicle door;
a suppressing module 33, configured to suppress a door closing instruction received by the vehicle if the relative positional relationship indicates that the passenger is located within the door frame range of the vehicle door.
In one possible implementation, the relationship determination module is configured to determine, based on the depth image, a position of at least one human key point of the passenger and a first distance between the at least one human key point and the depth sensor; determining a relative positional relationship of the at least one human body keypoint, a door frame of a vehicle door, and the depth image sensor based on a position of the at least one human body keypoint, a first distance between the at least one human body keypoint and the depth sensor, and a horizontal distance of the depth image sensor from the door frame of the vehicle door; and determining the relative position relationship between the passenger and the door frame of the vehicle door according to the relative position relationship between the at least one human body key point, the door frame of the vehicle door and the depth image sensor.
In a possible implementation manner, the relationship determining module is configured to determine an intersection point of a connection line between the depth image sensor and the at least one key point and a vertical plane where a door frame of the vehicle door is located, and calculate a second distance between the depth sensor and the intersection point; and determining the relative position relation of the at least one human body key point, the door frame of the vehicle door and the depth image sensor according to the comparison result of the first distance and the second distance.
In one possible implementation, the relationship determining module is configured to perform at least one of:
determining that the passenger has left the vehicle and is away from the door frame range of the vehicle door when the relative positional relationships between each of a plurality of preset human body key points, the door frame of the vehicle door, and the depth image sensor all satisfy a first condition;
determining that the passenger is located within the door frame range of the vehicle door when the relative positional relationships between any one of the plurality of preset human body key points, the door frame of the vehicle door, and the depth image sensor satisfy a second condition;
under the condition that the relative position relations between a plurality of preset human body key points and the door frame of the vehicle door and the depth image sensor meet a third condition, determining the range of the passenger in the vehicle and far away from the door frame of the vehicle door;
wherein the first condition comprises: the first distance is greater than the second distance and a difference between the first distance and the second distance is greater than a first threshold;
the second condition includes: an absolute difference between the first distance and the second distance is less than a first threshold;
the third condition includes: the first distance is less than the second distance and an absolute difference between the first distance and the second distance is greater than a first threshold.
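Assuming the first distance (sensor to key point) and second distance (sensor to the intersection with the door-frame plane) are already known for each preset key point, the three conditions amount to a thresholded comparison per key point plus an aggregation rule. A minimal sketch; the function names, threshold value, label strings, and the handling of borderline cases are illustrative assumptions:

```python
def classify_keypoint(first, second, threshold):
    """Classify one key point relative to the door-frame plane."""
    if first > second and first - second > threshold:
        return "outside"   # first condition: beyond the door-frame plane
    if abs(first - second) < threshold:
        return "in_frame"  # second condition: at the door-frame plane
    if first < second and second - first > threshold:
        return "inside"    # third condition: still inside the vehicle
    return "in_frame"      # borderline case, treated conservatively

def classify_passenger(distances, threshold=0.1):
    """distances: list of (first, second) pairs, one per preset key point.
    Any single key point at the frame means the passenger is in the frame."""
    labels = [classify_keypoint(f, s, threshold) for f, s in distances]
    if "in_frame" in labels:
        return "in_frame"
    if all(label == "outside" for label in labels):
        return "outside"
    if all(label == "inside" for label in labels):
        return "inside"
    return "in_frame"  # mixed inside/outside: the passenger spans the frame
```

Note the asymmetry in the conditions: a single key point near the frame plane suffices to place the passenger in the frame (second condition), whereas all key points must clear the plane before the passenger counts as outside (first condition) or inside (third condition).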
In one possible implementation, the preset human body key points include: top-of-head key points, shoulder key points, elbow key points, hand key points, leg key points, and foot key points.
In one possible implementation, the apparatus further includes:
a passenger detection module configured to detect a passenger in the vehicle door area according to image information of the vehicle door area collected by the depth image sensor;
wherein the acquisition module is configured to acquire the depth image of the passenger in the vehicle door area collected by the depth image sensor inside the vehicle when a passenger is detected in the vehicle door area and a vehicle door closing instruction is received.
In one possible implementation, the apparatus further includes:
a getting-off intention detection module configured to detect whether a passenger with an intention to get off exists in the vehicle door area according to the image information of the vehicle door area collected by the depth image sensor;
wherein the acquisition module is configured to acquire the depth image of the passenger in the vehicle door area collected by the depth image sensor inside the vehicle when a passenger with an intention to get off is detected in the vehicle door area and a vehicle door closing instruction is received.
In one possible implementation, the apparatus further includes:
a control module configured to control the vehicle door to close when the relative positional relationship indicates that the passenger has left the vehicle and is away from the vehicle door and a vehicle door closing instruction is received.
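Taken together, the suppression and control behavior of these modules can be sketched as a simple decision gate. All names are illustrative assumptions; the position label is whatever classification is derived from the depth image and the sensor-to-door-frame horizontal distance (here one of "inside", "in_frame", "outside"):

```python
def handle_door_close_request(close_instruction_received, passenger_position):
    """passenger_position: "inside", "in_frame", or "outside", derived
    from the depth image of the door area. Returns the action taken."""
    if not close_instruction_received:
        return "no_action"
    if passenger_position == "in_frame":
        # Suppress the close instruction: the passenger is within the
        # door frame and could be struck by the closing door.
        return "suppress_close"
    # Passenger is clear of the door frame (fully inside or fully
    # outside the vehicle), so the door may close safely.
    return "close_door"
```

The key design point is that the door-closing instruction is never acted on directly; it is gated on the passenger's position relative to the door frame, so a passenger mid-exit cannot be caught by the door.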
In one possible implementation, the depth image sensor is disposed on the vehicle ceiling at a position facing the vehicle door.
The method has a specific technical association with the internal structure of a computer system and can solve the technical problem of how to improve hardware operating efficiency or execution effect (including reducing data storage, reducing data transmission, and increasing hardware processing speed), thereby obtaining a technical effect, conforming to the laws of nature, of improving the internal performance of the computer system.
In some embodiments, the functions of, or the modules included in, the apparatus provided in the embodiments of the present disclosure may be used to execute the methods described in the method embodiments above; for their specific implementation, reference may be made to the descriptions of those method embodiments, which are not repeated here for brevity.
Embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the above-mentioned method. The computer readable storage medium may be a volatile or non-volatile computer readable storage medium.
An embodiment of the present disclosure further provides an electronic device, including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the memory-stored instructions to perform the above-described method.
Embodiments of the present disclosure also provide a computer program product including computer-readable code, or a non-volatile computer-readable storage medium carrying computer-readable code; when the computer-readable code runs in a processor of an electronic device, the processor executes the above method.
The electronic device may be provided as a terminal, server, or other form of device.
Fig. 4 illustrates a block diagram of an electronic device 800 in accordance with an embodiment of the disclosure. For example, the electronic device 800 may be a User Equipment (UE), a mobile device, a User terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or other terminal device.
Referring to fig. 4, electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 806 provides power to the various components of the electronic device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 800 is in an operation mode, such as a photographing mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing status assessments of various aspects of the electronic device 800. For example, the sensor assembly 814 may detect an open/closed state of the electronic device 800 and the relative positioning of components, such as the display and keypad of the electronic device 800. The sensor assembly 814 may also detect a change in the position of the electronic device 800 or a component of the electronic device 800, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a change in the temperature of the electronic device 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a complementary metal-oxide-semiconductor (CMOS) or charge-coupled device (CCD) image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as a wireless network (Wi-Fi), a second generation mobile communication technology (2G), a third generation mobile communication technology (3G), a fourth generation mobile communication technology (4G), a long term evolution of universal mobile communication technology (LTE), a fifth generation mobile communication technology (5G), or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium, such as the memory 804, is also provided that includes computer program instructions executable by the processor 820 of the electronic device 800 to perform the above-described methods.
The present disclosure relates to the field of augmented reality, and in particular to detecting or identifying relevant features, states, and attributes of a target object by acquiring image information of the target object in a real environment and applying various vision-related algorithms, so as to obtain an AR effect that combines the virtual and the real and matches a specific application. For example, the target object may involve a face, limbs, gestures, or actions associated with a human body; markers associated with an object; or sand tables, display areas, or display items associated with a venue or place. The vision-related algorithms may involve visual localization, SLAM, three-dimensional reconstruction, image registration, background segmentation, key point extraction and tracking of objects, pose or depth detection of objects, and the like. The specific applications may involve not only interactive scenarios such as navigation, explanation, reconstruction, and virtual-effect overlay display related to real scenes or objects, but also special-effect processing related to people, such as interactive scenarios of makeup beautification, body beautification, special-effect display, and virtual model display. The detection or identification of the relevant features, states, and attributes of the target object can be realized through a convolutional neural network, which is a network model obtained by model training based on a deep learning framework.
Fig. 5 illustrates a block diagram of an electronic device 1900 in accordance with an embodiment of the disclosure. For example, the electronic device 1900 may be provided as a server or terminal device. Referring to fig. 5, electronic device 1900 includes a processing component 1922 further including one or more processors and memory resources, represented by memory 1932, for storing instructions, e.g., applications, executable by processing component 1922. The application programs stored in memory 1932 may include one or more modules that each correspond to a set of instructions. Further, the processing component 1922 is configured to execute instructions to perform the above-described method.
Electronic device 1900 may also include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 may operate based on an operating system stored in the memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium, such as a memory 1932, is also provided that includes computer program instructions executable by a processing component 1922 of an electronic device 1900 to perform the above-described methods.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer-readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device over a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as a programmable logic circuit, a field-programmable gate array (FPGA), or a programmable logic array (PLA), can execute the computer-readable program instructions and thereby implement aspects of the present disclosure by utilizing state information of the computer-readable program instructions to personalize the electronic circuitry.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The computer program product may be embodied in hardware, software or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium, and in another alternative embodiment, the computer program product is embodied in a Software product, such as a Software Development Kit (SDK), or the like.
The foregoing description of the various embodiments is intended to highlight different aspects of the various embodiments that are the same or similar, which can be referenced with one another and therefore are not repeated herein for brevity.
It will be understood by those of skill in the art that in the above method of the present embodiment, the order of writing the steps does not imply a strict order of execution and does not impose any limitations on the implementation, as the order of execution of the steps should be determined by their function and possibly inherent logic.
If the technical solution of the present application involves personal information, a product applying the technical solution clearly informs users of the personal information processing rules and obtains the individual's separate consent before processing the personal information. If the technical solution involves sensitive personal information, a product applying the technical solution obtains the individual's separate consent before processing the sensitive personal information and additionally satisfies the requirement of "express consent". For example, a clear and prominent sign may be set at a personal information collection device such as a camera, informing persons that they are entering a personal information collection range and that personal information will be collected; if a person voluntarily enters the collection range, this is regarded as consent to the collection of his or her personal information. Alternatively, on a device that processes personal information, personal authorization may be obtained, with the personal information processing rules indicated by prominent signs or messages, through pop-up messages or by asking the person to upload his or her personal information personally. The personal information processing rules may include information such as the personal information processor, the purpose of the processing, the processing method, and the types of personal information processed.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (12)

1. A vehicle door control method, characterized by comprising:
acquiring a depth image of a passenger in a vehicle door area collected by a depth image sensor inside the vehicle;
determining a relative positional relationship between the passenger and a door frame of a vehicle door based on the depth image and a horizontal distance between the depth image sensor and the door frame of the vehicle door;
and suppressing a vehicle door closing instruction received by the vehicle when the relative positional relationship indicates that the passenger is located within a door frame range of the vehicle door.
2. The method of claim 1, wherein determining the relative positional relationship between the passenger and the door frame of the vehicle door based on the depth image and the horizontal distance between the depth image sensor and the door frame of the vehicle door comprises:
determining, based on the depth image, a position of at least one human body key point of the passenger and a first distance between the at least one human body key point and the depth image sensor;
determining a relative positional relationship among the at least one human body key point, the door frame of the vehicle door, and the depth image sensor based on the position of the at least one human body key point, the first distance, and the horizontal distance between the depth image sensor and the door frame of the vehicle door;
and determining the relative positional relationship between the passenger and the door frame of the vehicle door according to the relative positional relationship among the at least one human body key point, the door frame of the vehicle door, and the depth image sensor.
3. The method of claim 2, wherein determining the relative positional relationship among the at least one human body key point, the door frame of the vehicle door, and the depth image sensor based on the position of the at least one human body key point, the first distance, and the horizontal distance between the depth image sensor and the door frame of the vehicle door comprises:
determining an intersection point of the line connecting the depth image sensor and the at least one human body key point with the vertical plane in which the door frame of the vehicle door lies, and calculating a second distance between the depth image sensor and the intersection point;
and determining the relative positional relationship among the at least one human body key point, the door frame of the vehicle door, and the depth image sensor according to a comparison result of the first distance and the second distance.
4. The method of claim 3, wherein determining the relative positional relationship between the passenger and the door frame of the vehicle door according to the relative positional relationship among the at least one human body key point, the door frame of the vehicle door, and the depth image sensor comprises at least one of:
determining that the passenger has left the vehicle and is away from the door frame range of the vehicle door when the relative positional relationships among each of a plurality of preset human body key points, the door frame of the vehicle door, and the depth image sensor satisfy a first condition;
determining that the passenger is located within the door frame range of the vehicle door when the relative positional relationship among any one of the plurality of preset human body key points, the door frame of the vehicle door, and the depth image sensor satisfies a second condition;
determining that the passenger is inside the vehicle and away from the door frame range of the vehicle door when the relative positional relationships among each of the plurality of preset human body key points, the door frame of the vehicle door, and the depth image sensor satisfy a third condition;
wherein the first condition comprises: the first distance is greater than the second distance and a difference between the first distance and the second distance is greater than a first threshold;
the second condition includes: an absolute difference between the first distance and the second distance is less than a first threshold;
the third condition includes: the first distance is less than the second distance and an absolute difference between the first distance and the second distance is greater than a first threshold.
5. The method of claim 4, wherein the plurality of preset human body key points comprise: top-of-head key points, shoulder key points, elbow key points, hand key points, leg key points, and foot key points.
6. The method according to any one of claims 1-5, further comprising:
detecting a passenger in the vehicle door area according to image information of the vehicle door area collected by the depth image sensor;
wherein acquiring the depth image of the passenger in the vehicle door area collected by the depth image sensor inside the vehicle comprises:
acquiring the depth image of the passenger in the vehicle door area collected by the depth image sensor inside the vehicle when a passenger is detected in the vehicle door area and a vehicle door closing instruction is received.
7. The method according to any one of claims 1-5, further comprising:
detecting whether a passenger with an intention to get off exists in the vehicle door area according to the image information of the vehicle door area collected by the depth image sensor;
wherein acquiring the depth image of the passenger in the vehicle door area collected by the depth image sensor inside the vehicle comprises:
acquiring the depth image of the passenger in the vehicle door area collected by the depth image sensor inside the vehicle when a passenger with an intention to get off is detected in the vehicle door area and a vehicle door closing instruction is received.
8. The method according to any one of claims 1 to 7, further comprising:
and controlling the vehicle door to close when the relative positional relationship indicates that the passenger has left the vehicle and is away from the vehicle door and a vehicle door closing instruction is received.
9. The method according to any one of claims 1 to 8, wherein the depth image sensor is disposed on the vehicle ceiling at a position facing the vehicle door.
10. A vehicle door control device, characterized by comprising:
an acquisition module configured to acquire a depth image of a passenger in a vehicle door area collected by a depth image sensor inside the vehicle;
a relationship determination module configured to determine a relative positional relationship between the passenger and a door frame of a vehicle door based on the depth image and a horizontal distance between the depth image sensor and the door frame of the vehicle door;
and a suppression module configured to suppress a vehicle door closing instruction received by the vehicle when the relative positional relationship indicates that the passenger is located within a door frame range of the vehicle door.
11. An electronic device, comprising:
a processor; and
a memory for storing processor-executable instructions;
wherein the processor is configured to invoke the instructions stored in the memory to perform the method of any one of claims 1 to 9.
12. A computer-readable storage medium having computer program instructions stored thereon which, when executed by a processor, implement the method of any one of claims 1 to 9.
CN202210615384.2A 2022-05-31 2022-05-31 Vehicle door control method and device, electronic equipment and storage medium Pending CN115035500A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210615384.2A CN115035500A (en) 2022-05-31 2022-05-31 Vehicle door control method and device, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN115035500A (en) 2022-09-09

Family

ID=83122459

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210615384.2A Pending CN115035500A (en) 2022-05-31 2022-05-31 Vehicle door control method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115035500A (en)

Similar Documents

Publication Publication Date Title
CN112096222B (en) Trunk control method and device, vehicle, electronic device and storage medium
US11288531B2 (en) Image processing method and apparatus, electronic device, and storage medium
US20210158560A1 (en) Method and device for obtaining localization information and storage medium
KR101575159B1 (en) Method of operating application for providing parking information to mobile terminal
CN112001348A (en) Method and device for detecting passenger in vehicle cabin, electronic device and storage medium
KR20120140486A (en) Apparatus and method for providing guiding service in portable terminal
CN113763670A (en) Alarm method and device, electronic equipment and storage medium
CN113486760A (en) Object speaking detection method and device, electronic equipment and storage medium
CN112036303A (en) Method and device for reminding left-over article, electronic equipment and storage medium
CN113920492A (en) Method and device for detecting people in vehicle, electronic equipment and storage medium
CN113486759B (en) Dangerous action recognition method and device, electronic equipment and storage medium
WO2022183663A1 (en) Event detection method and apparatus, and electronic device, storage medium and program product
CN114407630A (en) Vehicle door control method and device, electronic equipment and storage medium
CN112667084B (en) Control method and device for vehicle-mounted display screen, electronic equipment and storage medium
CN111435422A (en) Motion recognition method, control method and device, electronic device and storage medium
CN113807167A (en) Vehicle collision detection method and device, electronic device and storage medium
CN113060144A (en) Distraction reminding method and device, electronic equipment and storage medium
CN112927378A (en) Payment management method and device for parking lot, electronic equipment and storage medium
CN109189068B (en) Parking control method and device and storage medium
CN115035500A (en) Vehicle door control method and device, electronic equipment and storage medium
WO2023029407A1 (en) Method and apparatus for vehicle to send information to emergency call center
CN111832338A (en) Object detection method and device, electronic equipment and storage medium
CN114495074A (en) Control method and device of vehicle, electronic equipment and storage medium
CN114495072A (en) Occupant state detection method and apparatus, electronic device, and storage medium
CN113911054A (en) Vehicle personalized configuration method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination