WO2021000587A1 - Vehicle door unlocking method and device, system, vehicle, electronic equipment and storage medium - Google Patents
- Publication number
- WO2021000587A1 (international application PCT/CN2020/076713)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- depth
- target object
- pixel
- depth map
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
- G06V40/40—Spoof detection, e.g. liveness detection
- G06V40/45—Detection of the body part being alive
- G07—CHECKING-DEVICES
- G07C—TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
- G07C9/00—Individual registration on entry or exit
- G07C9/00174—Electronically operated locks; Circuits therefor; Nonmechanical keys therefor, e.g. passive or active electrical keys or other data carriers without mechanical keys
- G07C9/00309—Electronically operated locks operated with bidirectional data transmission between data carrier and locks
- G07C9/00563—Electronically operated locks using personal physical data of the operator, e.g. fingerprints, retinal images, voice patterns
- G07C9/00896—Electronically operated locks specially adapted for particular uses
- G07C2009/00753—Electronically operated locks operated by active electrical keys
- G07C2009/00769—Electronically operated locks operated by active electrical keys with data transmission performed by wireless means
- G07C2209/00—Indexing scheme relating to groups G07C9/00 - G07C9/38
- G07C2209/60—Indexing scheme relating to groups G07C9/00174 - G07C9/00944
- G07C2209/63—Comprising locating means for detecting the position of the data carrier, i.e. within the vehicle or within a certain distance from the vehicle
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D30/00—Reducing energy consumption in communication networks
- Y02D30/70—Reducing energy consumption in communication networks in wireless communication networks
Definitions
- the present disclosure relates to the field of vehicle technology, and in particular to a method and device for unlocking a vehicle door, a system, a vehicle, an electronic device, and a storage medium.
- Face-scan door unlocking is a new technology for smart vehicles.
- In existing schemes, the camera needs to stay on continuously: to determine in time whether a person approaching the vehicle is the owner, the images collected by the camera must be processed in real time so that the owner can be identified and the door opened quickly.
- This approach has high operating power consumption, and long-term high-power operation may leave the battery too depleted to start the vehicle, affecting normal use and the user experience.
- the present disclosure proposes a technical solution for unlocking a vehicle door.
- a method for unlocking a vehicle door including:
- a door unlocking instruction and/or a door opening instruction are sent to at least one door of the vehicle.
- a method for unlocking a vehicle door including:
- a door unlocking instruction and/or a door opening instruction are sent to at least one door of the vehicle.
- a vehicle door unlocking device including:
- a search module, used to search for a Bluetooth device with a preset identifier via a Bluetooth module installed in the vehicle;
- a wake-up module, used to establish a Bluetooth pairing connection between the Bluetooth module and the Bluetooth device with the preset identifier in response to finding that device, and, in response to a successful pairing connection (or, alternatively, directly in response to finding the device), to wake up and control an image acquisition module provided in the vehicle to collect a first image of the target object;
- a face recognition module, configured to perform face recognition based on the first image;
- an unlocking module, used to send a door unlocking instruction and/or a door opening instruction to at least one door of the vehicle in response to successful face recognition.
- a vehicle-mounted face unlocking system including: a memory, a face recognition module, an image acquisition module, and a Bluetooth module; the face recognition module is connected to the memory and the Bluetooth module, respectively.
- the image acquisition module is connected to the Bluetooth module;
- the Bluetooth module is configured to wake up the face recognition module when the Bluetooth pairing connection with the Bluetooth device with the preset identifier succeeds, or when that device is found;
- the face recognition module is also provided with a communication interface for connecting with the door domain controller, and if face recognition succeeds, it sends control information for unlocking the door to the door domain controller.
- a vehicle including the vehicle-mounted face unlocking system, and the vehicle-mounted face unlocking system is connected to a door domain controller of the vehicle.
- an electronic device including:
- a memory for storing processor executable instructions
- the processor is configured to execute the method of the first aspect described above.
- an electronic device including:
- a memory for storing processor executable instructions
- the processor is configured to execute the method of the second aspect described above.
- a computer-readable storage medium having computer program instructions stored thereon, and when the computer program instructions are executed by a processor, the method of the first aspect described above is implemented.
- a computer-readable storage medium having computer program instructions stored thereon, and when the computer program instructions are executed by a processor, the method of the second aspect described above is implemented.
- a computer program including computer readable code, and when the computer readable code is executed in an electronic device, a processor in the electronic device executes to implement the above method.
- In the embodiments of the present disclosure, the Bluetooth pairing connection between the Bluetooth module and the Bluetooth device with the preset identifier is established in response to finding that device, and in response to a successful pairing connection the face recognition module is woken up and controls the image acquisition module to collect the first image of the target object. Waking the face recognition module only after a successful Bluetooth pairing connection effectively reduces the probability of false wake-ups, improving the user experience and reducing the power consumption of the face recognition module.
- Compared with short-range sensing technologies such as ultrasonic and infrared, the Bluetooth-based pairing connection offers higher security and supports larger distances.
- The embodiments of the present disclosure thus provide a solution that better balances the face recognition module's power saving, user experience, and security by waking the module only after a successful Bluetooth pairing connection.
- Fig. 1 shows a flowchart of a method for unlocking a vehicle door according to an embodiment of the present disclosure.
- Figure 2 shows a schematic diagram of the B-pillar of the car.
- FIG. 3 shows a schematic diagram of the installation height and the recognizable height range of the vehicle door unlocking device in the vehicle door unlocking method according to an embodiment of the present disclosure.
- Fig. 4a shows a schematic diagram of an image sensor and a depth sensor in a method for unlocking a vehicle door according to an embodiment of the present disclosure.
- Fig. 4b shows another schematic diagram of an image sensor and a depth sensor in a method for unlocking a vehicle door according to an embodiment of the present disclosure.
- FIG. 5 shows a schematic diagram of an example of a living body detection method according to an embodiment of the present disclosure.
- FIG. 6 shows a schematic diagram of an example of determining the result of the living body detection of the target object in the first image based on the first image and the second depth map in the living body detection method according to an embodiment of the present disclosure.
- Fig. 7 shows a schematic diagram of a depth prediction neural network in a method for unlocking a vehicle door according to an embodiment of the present disclosure.
- FIG. 8 shows a schematic diagram of a correlation detection neural network in a method for unlocking a vehicle door according to an embodiment of the present disclosure.
- Fig. 9 shows an exemplary schematic diagram of updating the depth map in a method for unlocking a vehicle door according to an embodiment of the present disclosure.
- FIG. 10 shows a schematic diagram of surrounding pixels in a method for unlocking a vehicle door according to an embodiment of the present disclosure.
- FIG. 11 shows another schematic diagram of surrounding pixels in a method for unlocking a vehicle door according to an embodiment of the present disclosure.
- FIG. 12 shows another flowchart of a method for unlocking a vehicle door according to an embodiment of the present disclosure.
- FIG. 13 shows a block diagram of a vehicle door unlocking device according to an embodiment of the present disclosure.
- Fig. 14 shows a block diagram of a vehicle face unlocking system according to an embodiment of the present disclosure.
- Fig. 15 shows a schematic diagram of a vehicle face unlocking system according to an embodiment of the present disclosure.
- FIG. 16 shows a schematic diagram of a car according to an embodiment of the present disclosure.
- Fig. 17 is a block diagram showing an electronic device 800 according to an exemplary embodiment.
- Fig. 1 shows a flowchart of a method for unlocking a vehicle door according to an embodiment of the present disclosure.
- the vehicle door unlocking method may be executed by a vehicle door unlocking device.
- the method for unlocking the vehicle door may be executed by an in-vehicle device or other processing device.
- the vehicle door unlocking device may be installed in at least one of the following positions: a B-pillar of a vehicle, at least one vehicle door, and at least one rearview mirror.
- Figure 2 shows a schematic diagram of the B-pillar of the car.
- the door unlocking device can be installed on the B-pillar, 130 cm to 160 cm above the ground, and its horizontal recognition distance can be 30 cm to 100 cm, which is not limited here.
- FIG. 3 shows a schematic diagram of the installation height and the recognizable height range of the vehicle door unlocking device in the vehicle door unlocking method according to an embodiment of the present disclosure.
- the installation height of the door unlocking device is 160 cm
- the recognizable height range is 140 cm to 190 cm.
- the method for unlocking the vehicle door may be implemented by a processor calling a computer readable instruction stored in the memory.
- the method for unlocking the vehicle door includes steps S11 to S15.
- step S11 a Bluetooth device with a preset identification is searched through the Bluetooth module installed in the car.
- searching for a Bluetooth device with a preset identifier via the Bluetooth module installed in the vehicle includes: searching for the Bluetooth device with the preset identifier via the Bluetooth module when the vehicle is turned off, or turned off with the doors locked.
- Restricting the search to these states can further reduce power consumption.
- the Bluetooth module may be a Bluetooth Low Energy (BLE) module.
- In the broadcast mode, the Bluetooth module broadcasts a data packet to its surroundings at regular intervals (for example, every 100 milliseconds).
- Surrounding Bluetooth devices that are scanning send a scan request to the Bluetooth module when they receive its broadcast packet, and the Bluetooth module responds by returning a scan response packet to the requesting device.
- If a scan request sent by a Bluetooth device with the preset identifier is received, it is determined that the Bluetooth device with the preset identifier has been found.
- Alternatively, the Bluetooth module can be in the scanning state when the vehicle is turned off, or turned off with the doors locked; if a Bluetooth device with the preset identifier is scanned, it is determined that that device has been found.
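The broadcast/scan exchange described above can be sketched as a plain simulation (no real radio; the class, field names, and sample address below are illustrative assumptions, not from the patent):

```python
# Illustrative simulation of the BLE broadcast/scan exchange.
PRESET_IDS = {"AA:BB:CC:DD:EE:FF"}  # allowlisted device address(es), hypothetical

class CarBluetoothModule:
    """Vehicle-side BLE module in broadcast mode."""

    def __init__(self, interval_ms=100):
        self.interval_ms = interval_ms  # broadcast period, e.g. every 100 ms

    def advertise(self):
        # A real module would radiate an advertising PDU here.
        return {"type": "ADV_IND", "payload": "door-unlock-service"}

    def on_scan_request(self, device_id):
        # Return a scan response; only a preset device counts as "found".
        return {"type": "SCAN_RSP", "found_preset": device_id in PRESET_IDS}

module = CarBluetoothModule()
adv = module.advertise()
rsp = module.on_scan_request("AA:BB:CC:DD:EE:FF")
```

A scan request from any other address yields `found_preset: False`, so the wake-up path is never triggered by unknown devices.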
- the Bluetooth module and the face recognition module can be integrated in the face recognition system.
- the Bluetooth module can be independent of the face recognition system. That is, the Bluetooth module can be installed outside the face recognition system.
- the embodiment of the present disclosure does not limit the maximum search distance of the Bluetooth module.
- the maximum search distance may be about 30 m.
- the identification of the Bluetooth device may refer to the unique identifier of the Bluetooth device.
- the identification of the Bluetooth device may be the ID, name or address of the Bluetooth device.
- the preset identification may be an identification of a device that is successfully paired with the Bluetooth module of the car based on the Bluetooth secure connection technology.
- the number of Bluetooth devices with preset identification may be one or more.
- When the identifier of the Bluetooth device is its ID, one or more Bluetooth IDs with permission to unlock the door can be preset.
- When there is a single Bluetooth device with the preset identifier, it may be the vehicle owner's device; when there are multiple, they may include the devices of the vehicle owner and of the owner's family, friends, and pre-registered contacts.
- the pre-registered contact person may be a pre-registered courier or property staff.
- the Bluetooth device may be any mobile device with Bluetooth function.
- the Bluetooth device may be a mobile phone, a wearable device, or an electronic key.
- the wearable device may be a smart bracelet or smart glasses.
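A minimal sketch of such a preset-identifier registry, assuming hypothetical device IDs and roles (the patent allows the identifier to be the device's ID, name, or address):

```python
# Hypothetical registry mapping preset Bluetooth identifiers to their holders.
preset_devices = {
    "phone-owner-01": "owner",
    "band-family-02": "family",
    "fob-courier-03": "pre-registered contact",
}

def is_preset(device_id):
    """True if the scanned device carries a preset identifier."""
    return device_id in preset_devices
```

Only devices in this table would trigger the pairing and wake-up steps that follow.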
- step S12 in response to searching for a Bluetooth device with a preset identification, a Bluetooth pairing connection between the Bluetooth module and the Bluetooth device with the preset identification is established.
- In one possible implementation, in response to finding a Bluetooth device with the preset identifier, the Bluetooth module authenticates that device's identity, and establishes the Bluetooth pairing connection only after authentication passes, thereby improving the security of the pairing connection.
- step S13 in response to the successful Bluetooth pairing connection, wake up and control the image acquisition module installed in the car to acquire the first image of the target object.
- waking up and controlling the image acquisition module installed in the vehicle to collect the first image of the target object includes: waking up the face recognition module installed in the vehicle, and controlling, by the awakened face recognition module, the image acquisition module to collect the first image of the target object.
- If a Bluetooth device with the preset identifier is found, it indicates with high probability that a user (such as the vehicle owner) carrying that device has entered the search range of the Bluetooth module.
- By establishing the Bluetooth pairing connection in response to finding the device with the preset identifier, and waking the face recognition module and controlling the image acquisition module to collect the first image of the target object only after the pairing connection succeeds, the probability of falsely waking the face recognition module is effectively reduced, which improves the user experience and reduces the module's power consumption.
- the Bluetooth-based pairing connection method has the advantages of high security and support for larger distances.
- Practice has shown that the time a user carrying the preset-identifier device takes to cover the remaining distance to the vehicle (the user-to-vehicle distance at the moment the pairing connection succeeds) is generally longer than the time the face recognition module needs to switch from the sleep state to the working state.
- The face recognition module is therefore ready to perform recognition as soon as the user reaches the door, with no need to wait for it to wake up, which improves the efficiency of face recognition and the user experience.
- the embodiments of the present disclosure provide a solution that can better weigh the face recognition module's power saving, user experience, and security by successfully waking up the face recognition module based on the Bluetooth pairing connection.
- In one possible implementation, the method further includes: if no face image is collected within a preset time after wake-up, controlling the face recognition module to enter the sleep state.
- Putting the module back to sleep when no face image is collected within the preset time reduces power consumption.
- In one possible implementation, the method further includes: if face recognition does not succeed within a preset time after wake-up, controlling the face recognition module to enter the sleep state.
- Putting the module back to sleep when recognition does not succeed within the preset time likewise reduces power consumption.
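The wake/sleep policy above can be sketched as a small state machine. The 15-second default is an assumed value; the patent only says "a preset time":

```python
class FaceRecognitionModule:
    """Sketch of the wake/sleep control: after wake-up, the module returns to
    sleep if no face image arrives (or recognition does not succeed) within a
    preset window."""

    def __init__(self, timeout_s=15.0):
        self.timeout_s = timeout_s  # assumed value
        self.state = "sleep"
        self._deadline = None

    def wake(self, now):
        self.state = "working"
        self._deadline = now + self.timeout_s

    def tick(self, now, face_image_collected):
        if self.state == "working":
            if face_image_collected:
                self._deadline = now + self.timeout_s  # activity extends the window
            elif now >= self._deadline:
                self.state = "sleep"                   # timeout: save power
        return self.state
```

Timestamps are passed in explicitly so the policy is easy to test without real clocks.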
- step S14 face recognition is performed based on the first image.
- face recognition includes living body detection and face authentication; performing face recognition based on the first image includes: collecting the first image via the image sensor of the image acquisition module and performing face authentication based on the first image and pre-registered facial features; and collecting a first depth map corresponding to the first image via the depth sensor of the image acquisition module and performing living body detection based on the first image and the first depth map.
- the first image contains the target object.
- the target object may be a human face or at least a part of a human body, which is not limited in the embodiment of the present disclosure.
- the first image may be a static image or a video frame image.
- the first image may be an image selected from a video sequence, where the image may be selected from the video sequence in a variety of ways.
- the first image is an image selected from a video sequence that meets a preset quality condition, where the preset quality condition may include one or any combination of the following: whether the target object is included, whether the target object is located in the center area of the image, whether the target object is completely contained in the image, the proportion of the target object in the image, the state of the target object (such as the face angle), the image clarity, and the image exposure; the embodiments of the present disclosure do not limit this.
- the living body detection can be performed first and then the face authentication can be performed. For example, if the live body detection result of the target object is that the target object is a living body, the face authentication process is triggered; if the live body detection result of the target object is that the target object is a prosthesis, the face authentication process is not triggered.
- face authentication can be performed first and then live body detection can be performed. For example, if the face authentication is passed, the living body detection process is triggered; if the face authentication is not passed, the living body detection process is not triggered.
- living body detection and face authentication can be performed at the same time.
- the living body detection is used to verify whether the target object is a living body, for example, it can be used to verify whether the target object is a human body.
- Face authentication extracts the facial features from the collected image and compares them with pre-registered facial features to determine whether they belong to the same person; for example, it can determine whether the facial features in the collected image belong to the vehicle owner.
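The liveness-first ordering described above can be sketched as follows; `detect_liveness` and `authenticate` are stand-ins for the models, which the patent does not specify:

```python
def face_recognition(first_image, first_depth_map, registered_features,
                     detect_liveness, authenticate):
    """Liveness-first ordering: face authentication is triggered only if the
    target object is judged to be a living body."""
    if not detect_liveness(first_image, first_depth_map):
        return "rejected: prosthesis"   # spoof detected, skip authentication
    if not authenticate(first_image, registered_features):
        return "rejected: not a registered face"
    return "success"
```

The text also allows authentication-first or simultaneous execution; swapping the two checks (or running them concurrently) implements those variants.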
- the depth sensor means a sensor for collecting depth information.
- the embodiments of the present disclosure do not limit the working principle and working band of the depth sensor.
- the image sensor and the depth sensor of the image acquisition module can be installed separately or together.
- When set separately, the image sensor may be an RGB (red, green, blue) sensor or an infrared sensor, and the depth sensor may be a binocular infrared sensor or a TOF (time of flight) sensor.
- When set together, the image acquisition module may use an RGBD (red, green, blue, depth) sensor that combines the functions of the image sensor and the depth sensor.
- If the image sensor is an RGB sensor, the image it collects is an RGB image.
- If the image sensor is an infrared sensor, the image it collects is an infrared image, which may be an infrared image with or without a light spot.
- the image sensor may be other types of sensors, which is not limited in the embodiment of the present disclosure.
- the vehicle door unlocking device may obtain the first image in multiple ways.
- the vehicle door unlocking device is provided with a camera, and the vehicle door unlocking device uses the camera to collect static images or video streams to obtain the first image, which is not limited in the embodiment of the present disclosure.
- the depth sensor is a three-dimensional sensor.
- the depth sensor is a binocular infrared sensor, a time-of-flight TOF sensor, or a structured light sensor, where the binocular infrared sensor includes two infrared cameras.
- the structured light sensor may be a coded structured light sensor or a speckle structured light sensor.
- the TOF sensor uses a TOF module based on the infrared band; using an infrared-band TOF module reduces the influence of external light on depth map capture.
- the first depth map corresponds to the first image.
- the first depth map and the first image may be collected by the depth sensor and the image sensor, respectively, for the same scene, or for the same target area at the same time, but the embodiment of the present disclosure does not limit this.
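One common way to approximate "the same target area at the same time" with two free-running sensors is nearest-timestamp pairing; a minimal sketch, where frames are `(timestamp, data)` tuples and the 50 ms tolerance is an assumed value:

```python
def pair_frames(rgb_frames, depth_frames, max_skew_s=0.05):
    """Pair each RGB frame with its nearest-in-time depth frame, discarding
    pairs whose timestamps differ by more than max_skew_s seconds."""
    pairs = []
    for t_rgb, rgb in rgb_frames:
        t_depth, depth = min(depth_frames, key=lambda f: abs(f[0] - t_rgb))
        if abs(t_depth - t_rgb) <= max_skew_s:
            pairs.append((rgb, depth))
    return pairs
```

Hardware-synchronized sensors would not need this step; it is only a software fallback.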
- Fig. 4a shows a schematic diagram of an image sensor and a depth sensor in a method for unlocking a vehicle door according to an embodiment of the present disclosure.
- the image sensor is an RGB sensor
- the camera of the image sensor is an RGB camera
- the depth sensor is a binocular infrared sensor.
- the binocular infrared depth sensor includes two infrared (IR) cameras, arranged on either side of the RGB camera of the image sensor; the two infrared cameras collect depth information based on the principle of binocular parallax.
- the image acquisition module further includes at least one fill light, arranged between an infrared camera of the binocular infrared sensor and the camera of the image sensor, including at least one of a fill light for the image sensor and a fill light for the depth sensor.
- If the image sensor is an RGB sensor, its fill light can be a white light; if the image sensor is an infrared sensor, its fill light can be an infrared light; if the depth sensor is a binocular infrared sensor, its fill light can be an infrared light.
- an infrared lamp is provided between the infrared camera of the binocular infrared sensor and the camera of the image sensor.
- the infrared lamp can use 940 nm infrared light.
- the fill light may be in the normally-on mode. In this example, when the camera of the image acquisition module is in the working state, the fill light is in the on state.
- the fill light can be turned on when the light is insufficient.
- the ambient light intensity can be obtained through the ambient light sensor, and when the ambient light intensity is lower than the light intensity threshold, it is determined that the light is insufficient, and the fill light is turned on.
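The two fill-light policies above can be sketched in a few lines; the threshold value is illustrative, as the patent gives no number:

```python
LIGHT_INTENSITY_THRESHOLD = 50.0  # lux; assumed value

def fill_light_should_be_on(camera_working, ambient_lux, normally_on=False):
    """Normally-on mode: the light follows the camera's working state.
    On-demand mode: the light turns on when the ambient light sensor
    reading falls below the threshold."""
    if not camera_working:
        return False
    return normally_on or ambient_lux < LIGHT_INTENSITY_THRESHOLD
```

In both modes the light is off whenever the camera is idle, which matches the power-saving emphasis of the disclosure.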
- Fig. 4b shows another schematic diagram of an image sensor and a depth sensor in a method for unlocking a vehicle door according to an embodiment of the present disclosure.
- the image sensor is an RGB sensor
- the camera of the image sensor is an RGB camera
- the depth sensor is a TOF sensor.
- the image acquisition module further includes a laser
- the laser is disposed between the camera of the depth sensor and the camera of the image sensor.
- the laser is arranged between the camera of the TOF sensor and the camera of the RGB sensor.
- the laser may be a VCSEL (Vertical Cavity Surface Emitting Laser), and the TOF sensor may collect a depth map based on the laser emitted by the VCSEL.
- the depth sensor is used to collect a depth map
- the image sensor is used to collect a two-dimensional image.
- Although RGB and infrared sensors are described above as examples of image sensors, and binocular infrared, TOF, and structured light sensors as examples of depth sensors, those skilled in the art will understand that the embodiments of the present disclosure are not limited to these; the types of image sensor and depth sensor may be selected according to actual application requirements, as long as a two-dimensional image and a depth map can be collected respectively.
- step S15 in response to successful face recognition, a door unlocking instruction and/or a door opening instruction are sent to at least one door of the vehicle.
- the vehicle door in the embodiment of the present disclosure may include a vehicle door through which people enter and exit (for example, a left front door, a right front door, a left rear door, and a right rear door), and may also include a trunk door of the vehicle.
- the at least one vehicle door lock may include at least one of a left front door lock, a right front door lock, a left rear door lock, a right rear door lock, and a trunk door lock.
- sending a door unlocking instruction and/or a door opening instruction to at least one door of the vehicle includes: in response to successful face recognition, acquiring the state information of at least one door of the vehicle; if the state information of a door is "not unlocked", sending a door unlocking instruction and a door opening instruction to that door; if the state information of a door is "unlocked and not opened", sending a door opening instruction to that door.
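The state-based dispatch described above can be sketched as follows. This is an illustrative sketch only; the state and instruction names are assumptions, as the disclosure does not specify concrete identifiers:

```python
from enum import Enum

class DoorState(Enum):
    # State names are illustrative; the disclosure only describes the two checks below.
    NOT_UNLOCKED = 1
    UNLOCKED_NOT_OPENED = 2
    OPENED = 3

def instructions_for_door(state: DoorState) -> list:
    """Return the instructions to send for one door, per the logic above."""
    if state is DoorState.NOT_UNLOCKED:
        return ["door_unlock", "door_open"]
    if state is DoorState.UNLOCKED_NOT_OPENED:
        return ["door_open"]
    return []  # already open: nothing to send
```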
- sending a door unlocking instruction and/or a door opening instruction to at least one door of the vehicle includes: in response to successful face recognition, determining the doors that the target object has permission to open; and sending a door unlocking instruction and/or a door opening instruction to at least one door of the vehicle according to the doors for which the target object has door-opening permission.
- the doors for which the target object has the authority to open doors may be all doors, or may be trunk doors.
- the doors for which the owner or his family or friends have the authority to open doors may be all doors, and the doors for which the courier or property staff has the authority to open doors may be the trunk doors.
- the vehicle owner can set the door information for other personnel with the authority to open the door.
- the doors that passengers have permission to open may be the non-cockpit doors and the trunk door. If the door that the target object has permission to open is the trunk door, the door unlocking instruction can be sent to the trunk door lock.
- if the doors that the target object has permission to open include only the trunk door, a door closing instruction can be sent to the trunk door lock a preset duration after the door unlocking instruction was sent to the trunk door lock; for example, the preset duration can be 3 minutes.
- for example, if the doors that a courier has permission to open include only the trunk door, the door closing instruction can be sent to the trunk door lock 3 minutes after the door unlocking instruction was sent, which satisfies the courier's need to place the delivery in the trunk while improving the security of the vehicle.
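The unlock-then-close-after-a-delay behaviour above can be sketched with a simple timer. This is a minimal sketch, not the disclosed implementation: `send_instruction` is a hypothetical callable that forwards an instruction to the trunk door lock, and the instruction strings are illustrative:

```python
import threading

PRESET_CLOSE_DELAY_S = 180.0  # the 3-minute example above

def unlock_trunk_with_auto_close(send_instruction, delay_s=PRESET_CLOSE_DELAY_S):
    """Send the trunk unlock instruction, then schedule a door closing
    instruction after `delay_s` seconds."""
    send_instruction("trunk_unlock")
    timer = threading.Timer(delay_s, send_instruction, args=("trunk_close",))
    timer.daemon = True  # do not keep the process alive for the timer
    timer.start()
    return timer
```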
- the time during which the target object has the permission to open the door may also be determined.
- the time when the target object has the right to open the door may be all times, or may be a preset time period.
- the time when the owner or the owner's family member has the authority to open the door may be all the time.
- the owner can set the time for other personnel with the authority to open the door. For example, in an application scenario where a friend of a car owner borrows a car from the car owner, the car owner can set the time for the friend to have the permission to open the door to two days. For another example, after the courier contacts the car owner, the car owner can set the time for the courier to open the door to 13:00-14:00 on September 29, 2019.
- the staff of the car rental agency can set the time for the customer to have the right to open the door to 3 days.
- the time when the passenger has the permission to open the door may be the service period of the travel order.
- the number of door opening permissions corresponding to the target object may be an unlimited number of times or a limited number of times.
- the number of door opening permissions corresponding to the car owner or the car owner's family or friends may be unlimited.
- the number of door opening permissions corresponding to the courier may be a limited number of times, such as 1 time.
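The three permission dimensions described above (which doors, what time period, how many times) can be combined into one check. A minimal sketch under the assumption that permissions are stored per target object; field and function names are illustrative:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional, Set

@dataclass
class DoorPermission:
    doors: Set[str]                 # e.g. {"trunk"} for a courier, all doors for the owner
    valid_from: datetime            # start of the permitted time period
    valid_until: datetime           # end of the permitted time period
    remaining_uses: Optional[int]   # None means an unlimited number of times

def may_open(perm: DoorPermission, door: str, now: datetime) -> bool:
    """Check door, time period, and remaining count, as described above."""
    if door not in perm.doors:
        return False
    if not (perm.valid_from <= now <= perm.valid_until):
        return False
    if perm.remaining_uses is not None and perm.remaining_uses <= 0:
        return False
    return True
```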
- the SoC of the door unlocking device may send a door unlocking instruction to the door domain controller to control the door to unlock.
- performing living body detection based on the first image and the first depth map includes: updating the first depth map based on the first image to obtain a second depth map; and determining the living body detection result of the target object based on the first image and the second depth map.
- the depth value of one or more pixels in the first depth map is updated to obtain the second depth map.
- the depth value of the depth failure pixel in the first depth map is updated to obtain the second depth map.
- a depth failure pixel in the depth map may refer to a pixel with an invalid depth value, that is, a pixel whose depth value is inaccurate or clearly inconsistent with the actual situation.
- the number of depth failure pixels can be one or more. By updating the depth value of at least one depth failure pixel in the depth map, the depth value of the depth failure pixel is more accurate, which helps to improve the accuracy of living body detection.
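The update of depth failure pixels can be sketched as replacing only the failed readings while leaving valid sensor depths untouched. Treating a depth value of 0 as "failed" is an assumption about the sensor's convention, not something the disclosure fixes:

```python
import numpy as np

def update_failed_depths(depth: np.ndarray, prediction: np.ndarray) -> np.ndarray:
    """Replace only the failed (here: zero-valued) depth readings with
    predicted values; valid sensor depths are kept as-is."""
    out = depth.copy()
    failed = depth == 0          # assumed invalid-value convention
    out[failed] = prediction[failed]
    return out
```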
- in some embodiments, the first depth map is a depth map with missing values, and the second depth map is obtained by repairing the first depth map based on the first image. Optionally, repairing the first depth map includes determining or supplementing the depth values of the pixels with missing values, but the embodiments of the present disclosure are not limited thereto.
- the first depth map can be updated or repaired in various ways.
- the first image is directly used for living body detection, for example, the first image is directly used to update the first depth map.
- the first image is preprocessed, and the living body detection is performed based on the preprocessed first image.
- the image of the target object is acquired from the first image, and the first depth map is updated based on the image of the target object.
- the image of the target object can be intercepted from the first image in various ways.
- for example, target detection is performed on the first image to obtain the position information of the target object, such as the position information of the bounding box of the target object, and the image of the target object is intercepted from the first image based on this position information.
- for example, the image of the area where the bounding box of the target object is located is intercepted from the first image as the image of the target object; as another example, the bounding box of the target object is enlarged by a certain multiple, and the image of the area where the enlarged bounding box is located is intercepted from the first image as the image of the target object.
- the key point information of the target object in the first image is acquired, and based on the key point information of the target object, the image of the target object is acquired from the first image.
- the key point information of the target object may include position information of multiple key points of the target object.
- the key points of the target object may include one or more of eye key points, eyebrow key points, nose key points, mouth key points, and face contour key points.
- the eye key points may include one or more of eye contour key points, eye corner key points, and pupil key points.
- the contour of the target object is determined based on the key point information of the target object, and the image of the target object is intercepted from the first image according to the contour of the target object.
- the position of the target object obtained through the key point information is more accurate, which is beneficial to improve the accuracy of subsequent living body detection.
- the contour of the target object in the first image can be determined based on the key points of the target object, and the image of the area where the contour is located, or the image of that area after a certain magnification, is determined as the image of the target object.
- for example, the elliptical area determined based on the key points of the target object in the first image may be determined as the image of the target object, or the minimum circumscribed rectangular area of that elliptical area may be determined as the image of the target object, but the embodiments of the present disclosure do not limit this.
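A minimal sketch of intercepting the target image from the key points. The minimum circumscribed rectangle stands in for the elliptical area, and the enlargement factor is illustrative:

```python
import numpy as np

def crop_by_keypoints(image: np.ndarray, keypoints: np.ndarray, scale: float = 1.2) -> np.ndarray:
    """Crop the region spanned by the key points, enlarged by `scale`.
    `keypoints` is an (N, 2) array of (x, y) positions."""
    h, w = image.shape[:2]
    x0, y0 = keypoints.min(axis=0)
    x1, y1 = keypoints.max(axis=0)
    cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0
    half_w, half_h = (x1 - x0) * scale / 2.0, (y1 - y0) * scale / 2.0
    xa, ya = max(int(cx - half_w), 0), max(int(cy - half_h), 0)
    xb, yb = min(int(cx + half_w) + 1, w), min(int(cy + half_h) + 1, h)
    return image[ya:yb, xa:xb]
```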
- the interference of the background information in the first image on the living body detection can be reduced.
- the acquired original depth map may be updated, or, in some embodiments, the depth map of the target object is acquired from the first depth map, and the depth map of the target object is updated based on the first image to obtain the second depth map.
- the position information of the target object in the first image is acquired, and based on the position information of the target object, the depth map of the target object is acquired from the first depth map.
- the first depth map and the first image may be registered or aligned in advance, but the embodiment of the present disclosure does not limit this.
- in this way, the second depth map is obtained, which can reduce the interference produced by the background information in the first depth map on living body detection.
- after the first image and the first depth map corresponding to the first image are acquired, the first image and the first depth map are aligned according to the parameters of the image sensor and the parameters of the depth sensor.
- conversion processing may be performed on the first depth map, so that the first depth map after the conversion processing is aligned with the first image.
- the first conversion matrix can be determined according to the parameters of the depth sensor and the parameters of the image sensor, and the first depth map can be converted according to the first conversion matrix.
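The conversion step can be sketched as warping the depth map with a 3x3 matrix. This is an assumption-laden sketch: a homography-style matrix is assumed to have been derived from the two sensors' intrinsic/extrinsic parameters, and nearest-neighbour splatting stands in for a real registration that would interpolate and handle occlusions:

```python
import numpy as np

def align_depth(depth: np.ndarray, conversion: np.ndarray) -> np.ndarray:
    """Warp the first depth map into the image sensor's frame using a 3x3
    conversion matrix; zero-valued pixels are treated as invalid readings."""
    h, w = depth.shape
    aligned = np.zeros_like(depth)
    ys, xs = np.nonzero(depth)                        # only map valid readings
    pts = np.stack([xs, ys, np.ones_like(xs)]).astype(float)
    u, v, s = conversion @ pts                        # homogeneous transform
    u = np.round(u / s).astype(int)
    v = np.round(v / s).astype(int)
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)      # keep in-bounds targets
    aligned[v[ok], u[ok]] = depth[ys[ok], xs[ok]]
    return aligned
```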
- at least a part of the converted first depth map may be updated to obtain a second depth map.
- the first depth map after the conversion processing is updated to obtain the second depth map.
- the depth map of the target object intercepted from the first depth map is updated to obtain the second depth map, and so on.
- conversion processing may be performed on the first image, so that the converted first image is aligned with the first depth map.
- the second conversion matrix can be determined according to the parameters of the depth sensor and the parameters of the image sensor, and the first image can be converted according to the second conversion matrix.
- based on at least a part of the converted first image, at least a part of the first depth map may be updated to obtain the second depth map.
- the parameters of the depth sensor may include internal parameters and/or external parameters of the depth sensor
- the parameters of the image sensor may include internal parameters and/or external parameters of the image sensor.
- the first image is an original image (for example, an RGB or infrared image).
- the first image may also refer to an image of a target object intercepted from the original image.
- similarly, the first depth map may also refer to a depth map of the target object intercepted from the original depth map, which is not limited in the embodiments of the present disclosure.
- FIG. 5 shows a schematic diagram of an example of a living body detection method according to an embodiment of the present disclosure.
- the first image is an RGB image and the target object is a human face.
- the RGB image and the first depth map are aligned and corrected, and the processed images are input into the face key point model for processing to obtain the RGB face map (the image of the target object) and the depth face map (the depth map of the target object); the depth face map is then updated or repaired based on the RGB face map.
- the live detection result of the target object may be that the target object is a living body or the target object is a prosthesis.
- the first image and the second depth map are input to the living body detection neural network for processing, and the living body detection result of the target object in the first image is obtained.
- the first image and the second depth map are processed by other living body detection algorithms to obtain the living body detection result.
- feature extraction is performed on the first image to obtain first feature information; feature extraction is performed on the second depth map to obtain second feature information; and based on the first feature information and the second feature information, the living body detection result of the target object in the first image is determined.
- the feature extraction process can be implemented by a neural network or other machine learning algorithms, and the type of feature information extracted can optionally be obtained by learning a sample, which is not limited in the embodiment of the present disclosure.
- the acquired depth map (such as the depth map collected by the depth sensor) may be partially invalid.
- in some scenarios, parts of the depth map may randomly fail.
- some special paper quality can make the printed face photos produce a similar effect of large-area failure or partial failure of the depth map.
- in addition, the depth map of a prosthesis may also partially fail while its imaging on the image sensor is normal. Therefore, when some depth maps partially or completely fail, using the depth map to distinguish between a living body and a prosthesis will cause errors. For this reason, in the embodiments of the present disclosure, the first depth map is repaired or updated, and the repaired or updated depth map is used for living body detection, which helps to improve the accuracy of living body detection.
- FIG. 6 shows a schematic diagram of an example of determining the result of the living body detection of the target object in the first image based on the first image and the second depth map in the living body detection method according to an embodiment of the present disclosure.
- the first image and the second depth map are input into the living body detection network for living body detection processing, and the living body detection result is obtained.
- the living body detection network includes two branches, namely a first sub-network and a second sub-network.
- the first sub-network is used to perform feature extraction processing on the first image to obtain first feature information.
- the second sub-network is used to perform feature extraction processing on the second depth map to obtain second feature information.
- the first sub-network may include a convolutional layer, a downsampling layer, and a fully connected layer.
- the first sub-network may include a first-level convolutional layer, a first-level down-sampling layer, and a first-level fully connected layer.
- the level of convolutional layer may include one or more convolutional layers
- the level of downsampling layer may include one or more downsampling layers
- the level of fully connected layer may include one or more fully connected layers.
- the first sub-network may include a multi-level convolutional layer, a multi-level down-sampling layer, and a first-level fully connected layer.
- each level of convolutional layer may include one or more convolutional layers
- each level of downsampling layer may include one or more downsampling layers
- this level of fully connected layer may include one or more fully connected layers.
- the i-th level down-sampling layer is cascaded after the i-th level convolutional layer
- the (i+1)-th level convolutional layer is cascaded after the i-th level down-sampling layer
- the fully connected layer is cascaded after the n-th level down-sampling layer, where i and n are both positive integers, 1 ≤ i < n, and n represents the number of levels of convolutional layers and down-sampling layers in the first sub-network.
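The cascade order can be sketched as a simple loop, using placeholder callables for the convolutional levels and 2x2 average pooling as a stand-in for a down-sampling layer (both are assumptions; the disclosure does not fix the layer implementations):

```python
import numpy as np

def avg_pool2(x: np.ndarray) -> np.ndarray:
    """Stand-in for one down-sampling layer: 2x2 average pooling."""
    h, w = (x.shape[0] // 2) * 2, (x.shape[1] // 2) * 2
    x = x[:h, :w]
    return (x[0::2, 0::2] + x[1::2, 0::2] + x[0::2, 1::2] + x[1::2, 1::2]) / 4.0

def first_subnetwork(x: np.ndarray, conv_levels) -> np.ndarray:
    """Cascade described above: the i-th level convolutional layer is followed
    by the i-th level down-sampling layer for i = 1..n, and the flattened
    output of the n-th down-sampling layer feeds the fully connected layer."""
    for conv in conv_levels:       # i = 1 .. n
        x = avg_pool2(conv(x))
    return x.reshape(-1)           # input vector to the fully connected layer
```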
- the first sub-network may include a convolutional layer, a down-sampling layer, a normalization layer, and a fully connected layer.
- the first sub-network may include a first-level convolutional layer, a normalization layer, a first-level down-sampling layer, and a first-level fully connected layer.
- the level of convolutional layer may include one or more convolutional layers
- the level of downsampling layer may include one or more downsampling layers
- the level of fully connected layer may include one or more fully connected layers.
- the first sub-network may include a multi-level convolutional layer, a plurality of normalization layers, a multi-level down-sampling layer, and a first-level fully connected layer.
- each level of convolutional layer may include one or more convolutional layers
- each level of downsampling layer may include one or more downsampling layers
- this level of fully connected layer may include one or more fully connected layers.
- the i-th normalized layer is cascaded after the i-th convolutional layer
- the i-th downsampling layer is cascaded after the i-th normalized layer
- the (i+1)-th level convolutional layer is cascaded after the i-th level down-sampling layer, and the fully connected layer is cascaded after the n-th level down-sampling layer, where i and n are both positive integers, 1 ≤ i < n, and n represents the number of levels of convolutional layers, down-sampling layers, and normalization layers in the first sub-network.
- the first image may be subjected to convolution processing and down-sampling processing through a first-level convolution layer and a first-level down-sampling layer.
- the level of convolutional layer may include one or more convolutional layers
- the level of downsampling layer may include one or more downsampling layers.
- the first image may be subjected to convolution processing and down-sampling processing through a multi-level convolution layer and a multi-level down-sampling layer.
- each level of convolutional layer may include one or more convolutional layers
- each level of downsampling layer may include one or more downsampling layers.
- performing down-sampling processing on the first convolution result to obtain the first down-sampling result may include: performing normalization processing on the first convolution result to obtain the first normalization result; and performing the first normalization result Perform down-sampling processing to obtain the first down-sampling result.
- the first down-sampling result may be input to the fully connected layer, and the first down-sampling result may be fused through the fully connected layer to obtain the first characteristic information.
- the second sub-network and the first sub-network have the same network structure, but have different parameters.
- the second sub-network has a different network structure from the first sub-network, which is not limited in the embodiment of the present disclosure.
- the living body detection network also includes a third sub-network, which is used to process the first feature information obtained by the first sub-network and the second feature information obtained by the second sub-network to obtain the living body detection result of the target object in the first image.
- the third sub-network may include a fully connected layer and an output layer.
- the output layer adopts the softmax function. If the output of the output layer is 1, it means that the target object is a living body, and if the output of the output layer is 0, it means that the target object is a prosthesis.
- the specific implementation is not limited.
- the first feature information and the second feature information are fused to obtain the third feature information; based on the third feature information, the live detection result of the target object in the first image is determined.
- the first feature information and the second feature information are fused through the fully connected layer to obtain the third feature information.
- the probability that the target object in the first image is a living body is obtained, and the living body detection result of the target object is determined according to the probability that the target object is a living body.
- for example, if the probability that the target object is a living body is greater than the second threshold, it is determined that the living body detection result of the target object is that the target object is a living body.
- for another example, if the probability that the target object is a living body is less than or equal to the second threshold, it is determined that the living body detection result of the target object is that the target object is a prosthesis.
- the probability that the target object is a prosthesis is obtained, and the live detection result of the target object is determined according to the probability that the target object is the prosthesis. For example, if the probability that the target object is a prosthesis is greater than the third threshold, it is determined that the target object's live body detection result is that the target object is a prosthesis. For another example, if the probability that the target object is a prosthesis is less than or equal to the third threshold, it is determined that the live body detection result of the target object is a live body.
- the third feature information can be input into the Softmax layer, and the probability that the target object is a living body or a prosthesis can be obtained through the Softmax layer.
- the output of the Softmax layer includes two neurons, where one neuron represents the probability that the target object is a living body, and the other neuron represents the probability that the target object is a prosthesis, but the embodiments of the present disclosure are not limited thereto.
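The two-neuron softmax output and the probability threshold described above can be sketched as follows. The index assignment (0 = living body, 1 = prosthesis) and the threshold value are assumptions for illustration:

```python
import numpy as np

def liveness_from_logits(logits, threshold=0.5):
    """Softmax over the two output neurons, then threshold the living-body
    probability to decide between living body and prosthesis."""
    z = np.asarray(logits, dtype=float)
    e = np.exp(z - z.max())          # numerically stable softmax
    p_live = (e / e.sum())[0]        # neuron 0: living-body probability
    label = "living body" if p_live > threshold else "prosthesis"
    return label, p_live
```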
- in this way, the living body detection result of the target object in the first image is determined based on the first image and the updated second depth map, so that the depth map is perfected, thereby improving the accuracy of living body detection.
- updating the first depth map based on the first image to obtain the second depth map includes: determining depth prediction values and association information of multiple pixels in the first image based on the first image, where the association information of the multiple pixels indicates the degree of association between the multiple pixels; and updating the first depth map based on the depth prediction values and the association information of the multiple pixels to obtain the second depth map.
- the depth prediction values of multiple pixels in the first image are determined based on the first image, and the first depth map is repaired and perfected based on the depth prediction values of the multiple pixels.
- the depth prediction values of multiple pixels in the first image are obtained.
- the first image is input into the depth prediction neural network for processing to obtain the depth prediction results of multiple pixels, for example, the depth prediction map corresponding to the first image, but the embodiments of the present disclosure do not limit this.
- the depth prediction values of multiple pixels in the first image are determined.
- the first image and the first depth map are input to the depth prediction neural network for processing to obtain depth prediction values of multiple pixels in the first image.
- the first image and the first depth map are processed in other ways to obtain depth prediction values of multiple pixels, which is not limited in the embodiment of the present disclosure.
- Fig. 7 shows a schematic diagram of a depth prediction neural network in a method for unlocking a vehicle door according to an embodiment of the present disclosure.
- the first image and the first depth map can be input to the depth prediction neural network for processing to obtain an initial depth estimation map.
- the depth prediction values of multiple pixels in the first image can be determined.
- the pixel value of the initial depth estimation map is the depth prediction value of the corresponding pixel in the first image.
- the deep prediction neural network can be realized through a variety of network structures.
- the depth prediction neural network includes an encoding part and a decoding part.
- the encoding part may include a convolutional layer and a downsampling layer
- the decoding part may include a deconvolutional layer and/or an upsampling layer.
- the encoding part and/or the decoding part may further include a normalization layer, and the embodiment of the present disclosure does not limit the specific implementation of the encoding part and the decoding part.
- in the encoding part, the resolution of the feature maps gradually decreases and the number of feature maps gradually increases, so that rich semantic features and image spatial features can be obtained; in the decoding part, the resolution of the feature maps gradually increases, and the resolution of the feature map finally output by the decoding part is the same as the resolution of the first depth map.
- fusion processing is performed on the first image and the first depth map to obtain a fusion result, and based on the fusion result, the depth prediction values of multiple pixels in the first image are determined.
- for example, the first image and the first depth map can be concatenated (concat) to obtain the fusion result.
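The concatenation step amounts to stacking the depth map as an extra channel of the image. A minimal sketch with channels-first layout and illustrative image sizes:

```python
import numpy as np

# Channels-first fusion of the first image and the first depth map
# (the 480x640 resolution is illustrative, not from the disclosure).
rgb = np.zeros((3, 480, 640), dtype=np.float32)    # first image
depth = np.zeros((1, 480, 640), dtype=np.float32)  # first depth map
fused = np.concatenate([rgb, depth], axis=0)       # 4-channel network input
```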
- convolution processing is performed on the fusion result to obtain a second convolution result; down-sampling processing is performed based on the second convolution result to obtain a first encoding result; and based on the first encoding result, the depth prediction values of multiple pixels in the first image are determined.
- convolution processing may be performed on the fusion result through the convolution layer to obtain the second convolution result.
- normalization processing is performed on the second convolution result to obtain the second normalization result; down sampling processing is performed on the second normalization result to obtain the first encoding result.
- the second convolution result can be normalized by the normalization layer to obtain the second normalization result; the second normalization result can be down-sampled by the down-sampling layer to obtain the first encoding result.
- the second convolution result may be down-sampled through the down-sampling layer to obtain the first encoding result.
- the first encoding result can be deconvolved by the deconvolution layer to obtain a first deconvolution result; the first deconvolution result can be normalized by the normalization layer to obtain the depth prediction values.
- the first encoding result may be deconvolved through the deconvolution layer to obtain the depth prediction value.
- the up-sampling process may be performed on the first encoding result through the up-sampling layer to obtain the first up-sampling result; the first up-sampling result may be normalized through the normalization layer to obtain the depth prediction value.
- the upsampling process may be performed on the first encoding result through the upsampling layer to obtain the depth prediction value.
- the association information of the plurality of pixels in the first image may include the degree of association between each pixel in the plurality of pixels of the first image and its surrounding pixels.
- the surrounding pixels of the pixel may include at least one adjacent pixel of the pixel, or include a plurality of pixels that are separated from the pixel by no more than a certain value.
- for example, the surrounding pixels of pixel 5 include pixel 1, pixel 2, pixel 3, pixel 4, pixel 6, pixel 7, pixel 8, and pixel 9 adjacent to it. Accordingly, the association information of pixel 5 in the first image includes the degrees of association between pixel 5 and each of pixel 1, pixel 2, pixel 3, pixel 4, pixel 6, pixel 7, pixel 8, and pixel 9.
- the degree of association between the first pixel and the second pixel may be measured by the correlation between the first pixel and the second pixel.
- the embodiments of the present disclosure may use related technologies to determine the correlation between pixels. This will not be repeated here.
- the associated information of multiple pixels can be determined in various ways.
- the first image is input to the correlation detection neural network for processing to obtain correlation information of multiple pixels in the first image.
- the associated feature map corresponding to the first image is obtained.
- other algorithms may be used to obtain the associated information of multiple pixels, which is not limited in the embodiment of the present disclosure.
- Fig. 8 shows a schematic diagram of a correlation detection neural network in a method for unlocking a vehicle door according to an embodiment of the present disclosure.
- the first image is input to the correlation detection neural network for processing, and multiple correlation feature maps are obtained.
- the associated information of multiple pixels in the first image can be determined.
- for example, if the surrounding pixels of a certain pixel refer to the pixels whose distance from that pixel is not greater than 1, that is, the pixels adjacent to the pixel, then the correlation detection neural network can output 8 associated feature maps.
- the correlation detection neural network can be realized through a variety of network structures.
- the correlation detection neural network may include an encoding part and a decoding part.
- the coding part may include a convolutional layer and a downsampling layer, and the decoding part may include a deconvolutional layer and/or an upsampling layer.
- the encoding part may also include a normalization layer, and the decoding part may also include a normalization layer.
- in the encoding part, the resolution of the feature maps gradually decreases and the number of feature maps gradually increases, so as to obtain rich semantic features and image spatial features; in the decoding part, the resolution of the feature maps gradually increases, and the resolution of the feature map finally output by the decoding part is the same as the resolution of the first image.
- the associated information may be an image, or may be other data forms, such as a matrix.
- inputting the first image into the correlation detection neural network for processing to obtain correlation information of multiple pixels in the first image may include: performing convolution processing on the first image to obtain a third convolution result; The third convolution result is subjected to down-sampling processing to obtain the second encoding result; based on the second encoding result, the associated information of multiple pixels in the first image is obtained.
- the first image may be convolved through the convolution layer to obtain the third convolution result.
- performing down-sampling processing based on the third convolution result to obtain the second encoding result may include: normalizing the third convolution result to obtain the third normalization result; normalizing the third The transformation result is subjected to down-sampling processing to obtain the second encoding result.
- the third convolution result can be normalized by the normalization layer to obtain the third normalized result; the third normalized result can be downsampled by the downsampling layer to obtain the second Encoding results.
- the third convolution result may be down-sampled through the down-sampling layer to obtain the second encoding result.
- determining the associated information based on the second encoding result may include: performing deconvolution processing on the second encoding result to obtain a second deconvolution result; and performing normalization processing on the second deconvolution result to obtain the associated information.
- the second encoding result can be deconvolved through the deconvolution layer to obtain the second deconvolution result; the second deconvolution result can be normalized through the normalization layer to obtain the correlation information.
- the second encoding result may be deconvolved through the deconvolution layer to obtain the associated information.
- determining the associated information based on the second encoding result may include: performing up-sampling processing on the second encoding result to obtain the second up-sampling result; normalizing the second up-sampling result to obtain the associated information .
- the second encoding result may be up-sampled through the up-sampling layer to obtain the second up-sampling result; the second up-sampling result may be normalized through the normalization layer to obtain the associated information.
- the second encoding result may be up-sampled through the up-sampling layer to obtain the associated information.
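The encode-then-decode shape of this branch (convolution, down-sampling to the second encoding result, up-sampling back, normalization) can be sketched as below. The 3x3 averaging kernel, the 2x factors, and the min-max normalization are illustrative assumptions, not the patent's actual trained network.

```python
import numpy as np

def conv2d_same(img, kernel):
    """Naive 'same' 2-D convolution with zero padding."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def downsample2x(x):
    """2x2 average pooling (the down-sampling step)."""
    h, w = x.shape
    return x[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample2x(x):
    """Nearest-neighbour up-sampling back to the input resolution."""
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

def normalize(x):
    """Map activations to [0, 1] as a stand-in for the normalization layer."""
    rng = x.max() - x.min()
    return (x - x.min()) / rng if rng > 0 else np.zeros_like(x)

def correlation_branch(first_image):
    kernel = np.ones((3, 3)) / 9.0            # assumed smoothing kernel
    conv3 = conv2d_same(first_image, kernel)  # third convolution result
    enc2 = downsample2x(conv3)                # second encoding result
    up2 = upsample2x(enc2)                    # second up-sampling result
    return normalize(up2)                     # associated information

img = np.random.rand(8, 8)
assoc = correlation_branch(img)
print(assoc.shape)  # (8, 8)
```

A real implementation would use learned convolution and deconvolution layers; the point of the sketch is only the data flow between the stages named in the text.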
- the 3D living body detection algorithm based on depth map self-refinement proposed in the embodiments of the present disclosure improves detection performance by completing and repairing the depth map acquired by the 3D sensor.
- the first depth map is updated based on the depth prediction values and associated information of the multiple pixels to obtain the second depth map.
- Fig. 9 shows an exemplary schematic diagram of updating the depth map in a method for unlocking a vehicle door according to an embodiment of the present disclosure.
- the first depth map is a depth map with missing values
- the obtained depth prediction values and associated information of multiple pixels are the initial depth estimation map and the associated feature map.
- the depth map with missing values, the initial depth estimation map, and the associated feature map are input to the depth map update module (for example, the depth update neural network) for processing to obtain the final depth map, that is, the second depth map.
- the depth prediction value of the depth failure pixel and the depth prediction values of multiple surrounding pixels of the depth failure pixel are obtained from the depth prediction values of the plurality of pixels; the degree of association between the depth failure pixel and its multiple surrounding pixels is obtained from the associated information of the plurality of pixels; and the updated depth value of the depth failure pixel is determined based on the depth prediction value of the depth failure pixel, the depth prediction values of its multiple surrounding pixels, and the degree of association between the depth failure pixel and its surrounding pixels.
- the depth invalid pixels in the depth map can be determined in various ways.
- a pixel with a depth value equal to 0 in the first depth map is determined as a depth failure pixel, or a pixel in the first depth map without a depth value is determined as a depth failure pixel.
- for the part of the first depth map whose depth values are not missing (that is, the depth values are not 0), the depth values are considered correct and credible; this part is not updated and the original depth values are retained.
- the depth value of the pixel whose depth value is 0 in the first depth map is updated.
- the depth sensor may set the depth value of the depth failure pixel to one or more preset values or preset ranges.
- pixels whose depth values in the first depth map are equal to a preset value or belong to a preset range may be determined as depth failure pixels.
- the embodiment of the present disclosure may also determine the depth failure pixel in the first depth map based on other statistical methods, which is not limited in the embodiment of the present disclosure.
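The ways of identifying depth failure pixels described above (depth value equal to 0, or inside a sensor-specific invalid value or range) can be expressed as a simple mask. The invalid range (65500, 65535] is an assumed example of a sensor's preset range, not taken from the patent.

```python
import numpy as np

def failure_mask(depth_map, invalid_low=65500, invalid_high=65535):
    """Mark pixels whose depth is 0 or falls inside an assumed invalid range."""
    zero_fail = depth_map == 0
    range_fail = (depth_map > invalid_low) & (depth_map <= invalid_high)
    return zero_fail | range_fail

d = np.array([[120, 0, 340],
              [0, 65535, 250]])
mask = failure_mask(d)
print(mask)  # True where the depth value is considered failed
```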
- the depth value of the pixel in the first image with the same position as the depth failure pixel can be determined as the depth prediction value of the depth failure pixel.
- the position of each surrounding pixel of the depth failure pixel in the first image can be determined, and the depth value of the pixel at the same position is determined as the depth prediction value of that surrounding pixel.
- the distance between the surrounding pixels of the depth failure pixel and the depth failure pixel is less than or equal to the first threshold.
- FIG. 10 shows a schematic diagram of surrounding pixels in a method for unlocking a vehicle door according to an embodiment of the present disclosure.
- if the first threshold is 0, only neighbor pixels are used as surrounding pixels.
- the neighboring pixels of pixel 5 include pixel 1, pixel 2, pixel 3, pixel 4, pixel 6, pixel 7, pixel 8, and pixel 9; then only these eight pixels serve as surrounding pixels of pixel 5.
- FIG. 11 shows another schematic diagram of surrounding pixels in a method for unlocking a vehicle door according to an embodiment of the present disclosure.
- if the first threshold is 1, then in addition to using neighbor pixels as surrounding pixels, the neighbor pixels of those neighbor pixels are also used as surrounding pixels. That is, in addition to pixel 1, pixel 2, pixel 3, pixel 4, pixel 6, pixel 7, pixel 8, and pixel 9, pixels 10 to 25 also serve as surrounding pixels of pixel 5.
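Reading Figs. 10-11 as square windows, a first threshold of t corresponds to a window of radius t + 1 around the failure pixel, excluding the pixel itself. That "radius = t + 1" reading is an assumption based on the two figures, sketched below.

```python
def surrounding_pixels(row, col, threshold, height, width):
    """Collect in-bounds surrounding pixels of (row, col) for a first threshold."""
    radius = threshold + 1
    pixels = []
    for r in range(row - radius, row + radius + 1):
        for c in range(col - radius, col + radius + 1):
            if (r, c) != (row, col) and 0 <= r < height and 0 <= c < width:
                pixels.append((r, c))
    return pixels

n0 = len(surrounding_pixels(2, 2, 0, 5, 5))
n1 = len(surrounding_pixels(2, 2, 1, 5, 5))
print(n0, n1)  # 8 pixels as in Fig. 10, 24 pixels as in Fig. 11
```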
- the depth correlation value of the depth failure pixel is determined based on the depth prediction values of its surrounding pixels and the degree of association between the depth failure pixel and those surrounding pixels; the updated depth value of the depth failure pixel is then determined based on the depth prediction value of the depth failure pixel and the depth correlation value.
- for each surrounding pixel of the depth failure pixel, the effective depth value of that surrounding pixel for the depth failure pixel is determined; the updated depth value of the depth failure pixel is then determined based on the effective depth value of each surrounding pixel for the depth failure pixel and the depth prediction value of the depth failure pixel.
- the product of the depth prediction value of a surrounding pixel of the depth failure pixel and the degree of association corresponding to that surrounding pixel can be determined as the effective depth value of that surrounding pixel for the depth failure pixel, where the degree of association corresponding to a surrounding pixel refers to the degree of association between that surrounding pixel and the depth failure pixel.
- the product of the sum of the effective depth values of the surrounding pixels for the depth failure pixel and a first preset coefficient is determined to obtain a first product; the product of the depth prediction value of the depth failure pixel and a second preset coefficient is determined to obtain a second product; and the sum of the first product and the second product is determined as the updated depth value of the depth failure pixel.
- the sum of the first preset coefficient and the second preset coefficient is 1.
- the degree of association between the depth failure pixel and each surrounding pixel is used as the weight of that surrounding pixel, and the depth prediction values of the multiple surrounding pixels of the depth failure pixel are weighted and summed to obtain the depth correlation value of the depth failure pixel. For example, if pixel 5 is a depth failure pixel, its depth correlation value is the sum of W_i * F_i over its surrounding pixels i, and Formula 1 can be used to determine the updated depth value F_5' of depth failure pixel 5,
- where W_i represents the degree of association between surrounding pixel i and pixel 5,
- and F_i represents the depth prediction value of surrounding pixel i.
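Combining the weighted-sum correlation value with the two preset coefficients described above (which sum to 1) gives the following sketch of the update for one failure pixel. The coefficient alpha = 0.7 is an assumed value, not from the patent.

```python
import numpy as np

def updated_depth(pred_center, preds_around, correlations, alpha=0.7):
    """Update a depth-failure pixel: blend the correlation-weighted sum of
    surrounding depth predictions with the pixel's own prediction, using
    preset coefficients alpha and (1 - alpha) that sum to 1."""
    w = np.asarray(correlations, dtype=float)  # W_i
    f = np.asarray(preds_around, dtype=float)  # F_i
    depth_assoc = np.sum(w * f)                # depth correlation value
    return alpha * depth_assoc + (1 - alpha) * pred_center

val = updated_depth(100.0, [90.0, 110.0], [0.5, 0.5])
print(val)  # 0.7 * (0.5*90 + 0.5*110) + 0.3 * 100 = 100.0
```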
- the product of the degree of association between each surrounding pixel of the depth failure pixel and the depth failure pixel and the depth prediction value of that surrounding pixel is determined, and the maximum of these products is determined as the depth correlation value of the depth failure pixel.
- the sum of the depth prediction value of the depth failure pixel and the depth associated value is determined as the updated depth value of the depth failure pixel.
- the product of the depth prediction value of the depth failure pixel and a third preset coefficient is determined to obtain a third product; the product of the depth correlation value and a fourth preset coefficient is determined to obtain a fourth product; and the sum of the third product and the fourth product is determined as the updated depth value of the depth failure pixel. In some embodiments, the sum of the third preset coefficient and the fourth preset coefficient is 1.
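The max-product variant just described can be sketched the same way; beta = 0.6 is an assumed third coefficient, with 1 - beta as the fourth.

```python
import numpy as np

def updated_depth_max(pred_center, preds_around, correlations, beta=0.6):
    """Alternative update: the depth correlation value is the maximum of
    (association * prediction) over surrounding pixels, then blended with
    the pixel's own prediction using coefficients that sum to 1."""
    products = np.asarray(correlations, dtype=float) * np.asarray(preds_around, dtype=float)
    depth_assoc = products.max()
    return beta * pred_center + (1 - beta) * depth_assoc

val = updated_depth_max(100.0, [90.0, 110.0], [0.9, 0.5])
print(val)  # 0.6*100 + 0.4*max(0.9*90, 0.5*110) = 60 + 0.4*81 = 92.4
```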
- the depth value of the non-depth failure pixel in the second depth map is equal to the depth value of the non-depth failure pixel in the first depth map.
- the depth value of the non-depth failure pixel may also be updated to obtain a more accurate second depth map, which can further improve the accuracy of the living body detection.
- the Bluetooth module provided in the vehicle searches for a Bluetooth device with a preset identification; in response to finding such a device, a Bluetooth pairing connection is established between the Bluetooth module and that device; in response to a successful pairing connection, the image acquisition module installed in the vehicle is woken up and controlled to collect the first image of the target object; face recognition is performed based on the first image; and, in response to successful face recognition, a door unlocking instruction and/or a door opening instruction is sent to at least one door of the vehicle. In this way, when no Bluetooth pairing connection has been established with a device carrying the preset identification, the face recognition module can remain in a dormant, low-power state, reducing the power consumption required for face recognition and door opening.
- the embodiments of the present disclosure can not only meet the requirements of low-power operation, but also meet the requirements of fast opening doors.
- the living body detection and face authentication process can be triggered automatically, and the vehicle door is opened automatically after the owner passes living body detection and face authentication.
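The unlock flow above (search, pair, wake, capture, liveness check, authenticate, unlock) can be sketched as a single control function. All component names here are illustrative placeholders, not APIs from the patent.

```python
def try_unlock(search_bluetooth, pair, capture_image,
               liveness_check, authenticate, unlock_door):
    """Hedged sketch of the door-unlock control flow."""
    if not search_bluetooth():      # preset-identification device in range?
        return "sleep"              # stay in the low-power dormant state
    if not pair():                  # Bluetooth pairing connection
        return "sleep"
    image = capture_image()         # wake the module, collect the first image
    if liveness_check(image) and authenticate(image):
        unlock_door()               # send unlock and/or open instruction
        return "unlocked"
    return "denied"                 # e.g. fall back to password unlocking

# Toy run with stubbed components:
result = try_unlock(lambda: True, lambda: True, lambda: "img",
                    lambda img: True, lambda img: True, lambda: None)
print(result)  # unlocked
```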
- the method further includes: in response to a face recognition failure, activating a password unlocking module provided in the car to start a password unlocking process.
- password unlocking is an alternative to face recognition unlocking.
- the reasons for face recognition failure may include at least one of the following: the living body detection result indicates that the target object is a prosthesis, face authentication fails, image collection fails (for example, a camera failure), or the number of recognition attempts exceeds a predetermined number.
- the password unlocking process is initiated.
- the password entered by the user can be obtained through the touch screen on the B-pillar.
- after M consecutive incorrect password attempts, password unlocking becomes invalid; for example, M equals 5.
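The fallback and its attempt limit can be sketched as follows. The placeholder password "secret" and the lockout behavior after M wrong entries are illustrative, with M = 5 taken from the example in the text.

```python
def password_unlock(attempts, correct="secret", m=5):
    """Return the outcome of the password-unlocking process: success on a
    correct entry, lockout after m consecutive wrong entries."""
    for i, attempt in enumerate(attempts):
        if i >= m:
            return "locked_out"      # password unlocking invalidated
        if attempt == correct:
            return "unlocked"
    return "locked_out" if len(attempts) >= m else "pending"

print(password_unlock(["a", "b", "secret"]))  # unlocked
print(password_unlock(["a"] * 5))             # locked_out
```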
- the method further includes one or both of the following: performing vehicle owner registration based on the face image of the vehicle owner collected by the image acquisition module; and performing remote registration based on the face image of the vehicle owner collected by the owner's terminal device and sending the registration information to the vehicle, where the registration information includes the face image of the vehicle owner.
- registering the vehicle owner based on the face image collected by the image acquisition module includes: when a click on the registration button on the touch screen is detected, asking the user to enter a password; after the password verification is passed, starting the RGB camera in the image acquisition module to acquire the user's face image; registering according to the acquired face image; and extracting the facial features in the face image as the pre-registered facial features, so that subsequent face authentication can be performed by comparison against these pre-registered facial features.
- remote registration is performed according to the face image of the vehicle owner collected by the terminal device of the vehicle owner, and the registration information is sent to the vehicle, where the registration information includes the face image of the vehicle owner.
- the car owner can send a registration request to the TSP (Telematics Service Provider) cloud through the mobile phone App (Application), where the registration request can carry the face image of the car owner; the TSP cloud sends the registration request Send to the vehicle-mounted T-Box (Telematics Box, telematics processor) of the door unlocking device.
- the vehicle-mounted T-Box activates the face recognition function according to the registration request and uses the facial features in the face image carried in the registration request as the pre-registered facial features, so that subsequent face authentication can be performed based on these pre-registered facial features.
- FIG. 12 shows another flowchart of a method for unlocking a vehicle door according to an embodiment of the present disclosure.
- the vehicle door unlocking method may be executed by a vehicle door unlocking device.
- the method for unlocking the vehicle door may be implemented by a processor calling a computer readable instruction stored in the memory.
- the method for unlocking the vehicle door includes steps S21 to S24.
- step S21 the Bluetooth module installed in the car searches for a Bluetooth device with a preset identification.
- searching for a Bluetooth device with a preset identifier via the Bluetooth module provided in the vehicle includes: when the vehicle is in the ignition-off state, or in the ignition-off state with the doors locked, searching for the Bluetooth device with the preset identification via the Bluetooth module of the vehicle.
- step S22 in response to searching for the Bluetooth device with the preset identifier, wake up and control the image acquisition module provided in the vehicle to acquire the first image of the target object.
- the number of Bluetooth devices with the preset identification is one.
- the number of Bluetooth devices with the preset identification is multiple; and waking up and controlling the image acquisition module installed in the vehicle to collect the first image of the target object in response to searching for a Bluetooth device with the preset identification includes: in response to finding any one of the Bluetooth devices with a preset identification, waking up and controlling the image acquisition module installed in the vehicle to collect the first image of the target object.
- waking up and controlling the image acquisition module installed in the vehicle to collect the first image of the target object includes: waking up the face recognition module installed in the vehicle, and controlling the image acquisition module via the woken face recognition module to acquire the first image of the target object.
- the embodiments of the present disclosure can support a larger distance by adopting Bluetooth.
- practice shows that the time it takes a user carrying a Bluetooth device with the preset identification to cover this distance (the distance between the user and the vehicle when the vehicle's Bluetooth module finds the user's device) roughly matches the time it takes the vehicle to switch the face recognition module from the sleep state to the working state. As a result, when the user arrives at the door, face recognition can begin immediately, without waiting for the face recognition module to be woken up after arrival, which increases the efficiency of face recognition and improves the user experience.
- the embodiments of the present disclosure provide a way of waking up the face recognition module in response to finding a Bluetooth device with the preset identification, which better balances power saving, user experience, and security of the face recognition module.
- the method further includes: if no face image is collected within a preset time, controlling the face recognition module to enter a sleep state.
- the method further includes: if face recognition does not succeed within a preset time, controlling the face recognition module to enter a sleep state.
- step S23 face recognition is performed based on the first image.
- step S24 in response to successful face recognition, a door unlocking instruction and/or a door opening instruction are sent to at least one door of the vehicle.
- sending a door unlocking instruction and/or a door opening instruction to at least one door of the vehicle includes: in response to successful face recognition, determining the door for which the target object has door-opening permission; and, according to the door for which the target object has door-opening permission, sending a door unlocking instruction and/or a door opening instruction to at least one door of the vehicle.
- the face recognition includes: living body detection and face authentication;
- performing face recognition based on the first image includes: collecting the first image through an image sensor in the image acquisition module and performing face authentication based on the first image and pre-registered facial features; and collecting the first depth map corresponding to the first image through a depth sensor in the image acquisition module and performing living body detection based on the first image and the first depth map.
- performing living body detection based on the first image and the first depth map includes: updating the first depth map based on the first image to obtain a second depth map; and determining the living body detection result of the target object based on the first image and the second depth map.
- the image sensor includes an RGB image sensor or an infrared sensor;
- the depth sensor includes a binocular infrared sensor or a time-of-flight TOF sensor.
- the TOF sensor adopts a TOF module based on an infrared band.
- updating the first depth map based on the first image to obtain the second depth map includes: updating the depth values of the depth failure pixels in the first depth map based on the first image to obtain the second depth map.
- updating the first depth map based on the first image to obtain the second depth map includes: determining depth prediction values and associated information of multiple pixels in the first image based on the first image, where the associated information of the multiple pixels indicates the degree of association between the multiple pixels; and updating the first depth map based on the depth prediction values and associated information of the multiple pixels to obtain the second depth map.
- updating the first depth map based on the depth prediction values and associated information of the plurality of pixels to obtain the second depth map includes: determining the depth failure pixel in the first depth map; obtaining the depth prediction value of the depth failure pixel and the depth prediction values of multiple surrounding pixels of the depth failure pixel from the depth prediction values of the plurality of pixels; obtaining the degree of association between the depth failure pixel and its multiple surrounding pixels from the associated information of the plurality of pixels; and determining the updated depth value of the depth failure pixel based on the depth prediction value of the depth failure pixel, the depth prediction values of its multiple surrounding pixels, and the degree of association between the depth failure pixel and its surrounding pixels.
- determining the updated depth value of the depth failure pixel based on the depth prediction value of the depth failure pixel, the depth prediction values of its multiple surrounding pixels, and the degree of association between the depth failure pixel and those surrounding pixels includes: determining the depth correlation value of the depth failure pixel based on the depth prediction values of its surrounding pixels and the degree of association between the depth failure pixel and those surrounding pixels; and determining the updated depth value of the depth failure pixel based on the depth prediction value of the depth failure pixel and the depth correlation value.
- determining the depth correlation value based on the depth prediction values of the surrounding pixels of the depth failure pixel and the degree of association between the depth failure pixel and those surrounding pixels includes: using the degree of association between the depth failure pixel and each surrounding pixel as the weight of that surrounding pixel, and performing a weighted sum of the depth prediction values of the multiple surrounding pixels of the depth failure pixel to obtain the depth correlation value of the depth failure pixel.
- the determining depth prediction values of multiple pixels in the first image based on the first image includes: determining based on the first image and the first depth map Depth prediction values of multiple pixels in the first image.
- the determining the depth prediction values of multiple pixels in the first image based on the first image and the first depth map includes: combining the first image and the The first depth map is input to a depth prediction neural network for processing to obtain depth prediction values of multiple pixels in the first image.
- determining the depth prediction values of multiple pixels in the first image based on the first image and the first depth map includes: performing fusion processing on the first image and the first depth map to obtain a fusion result; and determining the depth prediction values of the multiple pixels in the first image based on the fusion result.
- the determining the association information of multiple pixels in the first image based on the first image includes: inputting the first image to a correlation detection neural network for processing, Obtain the associated information of multiple pixels in the first image.
- updating the first depth map based on the first image includes: acquiring an image of the target object from the first image, and updating the first depth map based on the image of the target object.
- obtaining an image of the target object from the first image includes: obtaining key point information of the target object in the first image, and obtaining the image of the target object from the first image based on the key point information.
- acquiring the key point information of the target object in the first image includes: performing target detection on the first image to obtain the area where the target object is located, and performing key point detection on the image of that area to obtain the key point information of the target object in the first image.
- updating the first depth map based on the first image to obtain a second depth map includes: obtaining a depth map of the target object from the first depth map, and updating the depth map of the target object based on the first image to obtain the second depth map.
- determining the living body detection result of the target object based on the first image and the second depth map includes: inputting the first image and the second depth map to the living body detection neural network for processing to obtain the living body detection result of the target object.
- determining the living body detection result of the target object based on the first image and the second depth map includes: performing feature extraction processing on the first image to obtain first feature information; performing feature extraction processing on the second depth map to obtain second feature information; and determining the living body detection result of the target object based on the first feature information and the second feature information.
- determining the living body detection result of the target object based on the first feature information and the second feature information includes: fusing the first feature information and the second feature information to obtain third feature information, and determining the living body detection result of the target object based on the third feature information.
- determining the living body detection result of the target object based on the third feature information includes: obtaining the probability that the target object is a living body based on the third feature information, and determining the living body detection result of the target object according to that probability.
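The extract-fuse-score pipeline just described can be sketched end to end. The mean/std "feature extractor", the logistic score, and the 0.5 threshold are placeholders standing in for the patent's trained networks.

```python
import numpy as np

def extract(x):
    """Stand-in feature extractor: per-image mean and standard deviation."""
    return np.array([x.mean(), x.std()])

def liveness(first_image, second_depth, weights, threshold=0.5):
    f1 = extract(first_image)            # first feature information
    f2 = extract(second_depth)           # second feature information
    fused = np.concatenate([f1, f2])     # third feature information (fusion)
    score = 1.0 / (1.0 + np.exp(-fused @ weights))  # probability of liveness
    return "live" if score >= threshold else "spoof"

rng = np.random.default_rng(0)
img, depth = rng.random((8, 8)), rng.random((8, 8))
result = liveness(img, depth, weights=np.array([1.0, 1.0, 1.0, 1.0]))
print(result)
```

In a real system the fusion step would typically concatenate or add feature maps inside the network rather than hand-crafted statistics; the sketch only mirrors the stages named in the text.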
- the method further includes: in response to a face recognition failure, activating a password unlocking module provided in the vehicle to start the password unlocking process.
- the method further includes one or both of the following: performing vehicle owner registration based on the face image of the vehicle owner collected by the image acquisition module; and performing remote registration based on the face image of the vehicle owner collected by the owner's terminal device and sending the registration information to the vehicle, where the registration information includes the face image of the vehicle owner.
- the writing order of the steps does not imply a strict execution order and does not constitute any limitation on the implementation process; the specific execution order of each step should be determined by its function and possible inner logic.
- the present disclosure also provides a vehicle door unlocking device, an electronic device, a computer-readable storage medium, and a program, all of which can be used to implement any of the vehicle door unlocking methods provided in the present disclosure.
- FIG. 13 shows a block diagram of a vehicle door unlocking device according to an embodiment of the present disclosure.
- the vehicle door unlocking device includes: a search module 31, which is used to search for a Bluetooth device with a preset identification via a Bluetooth module provided in the car;
- a wake-up module 32, configured to: in response to finding the Bluetooth device with the preset identification, establish a Bluetooth pairing connection between the Bluetooth module and that device and, in response to a successful pairing connection, wake up and control the image acquisition module set in the vehicle to collect the first image of the target object; or, in response to finding the Bluetooth device with the preset identification, wake up and control the image acquisition module installed in the vehicle to collect the first image of the target object;
- a face recognition module 33, configured to perform face recognition based on the first image;
- the unlocking module 34 is configured to send a door unlocking instruction and/or a door opening instruction to at least one door of the vehicle in response to a successful face recognition.
- a Bluetooth pairing connection between the Bluetooth module and the Bluetooth device with the preset identification is established in response to finding that device; in response to a successful pairing connection, the face recognition module is woken up and the image acquisition module is controlled to collect the first image of the target object. Because the face recognition module is woken up only after a successful Bluetooth pairing connection, the probability of falsely waking the face recognition module is effectively reduced, which improves the user experience and effectively reduces the power consumption of the face recognition module.
- the search module 31 is configured to search for a Bluetooth device with a preset identification via the Bluetooth module provided in the vehicle when the vehicle is in the ignition-off state, or in the ignition-off state with the doors locked.
- there is no need to search for the Bluetooth device with the preset identification via the Bluetooth module before the vehicle is turned off, or when the vehicle is turned off but the doors are not locked, which can further reduce power consumption.
- the number of Bluetooth devices with the preset identification is one.
- the number of Bluetooth devices with the preset identification is multiple;
- the wake-up module 32 is configured to: in response to finding any Bluetooth device with a preset identification, establish a Bluetooth pairing connection between the Bluetooth module and that device; or, in response to finding any Bluetooth device with a preset identification, wake up and control the image acquisition module installed in the vehicle to acquire the first image of the target object.
- the wake-up module 32 includes: a wake-up sub-module, configured to wake up the face recognition module installed in the vehicle; and a control sub-module, configured to control the image acquisition module via the woken face recognition module to acquire the first image of the target object.
- if a Bluetooth device with a preset identifier is found, it indicates with high probability that a user (such as the vehicle owner) carrying that device has entered the search range of the Bluetooth module.
- by establishing a Bluetooth pairing connection between the Bluetooth module and the Bluetooth device with the preset identification in response to finding that device, and by waking up the face recognition module and controlling the image acquisition module to collect the first image of the target object in response to a successful pairing connection, the face recognition module is woken up only after a successful Bluetooth pairing connection, which effectively reduces the probability of falsely waking the face recognition module, improves the user experience, and effectively reduces the power consumption of the face recognition module.
- the Bluetooth-based pairing connection method has the advantages of high security and support for larger distances.
- practice has shown that the time it takes a user carrying a Bluetooth device with the preset identification to reach the vehicle over this distance (the distance between the user and the vehicle when the Bluetooth pairing connection succeeds) roughly matches the time it takes the vehicle to switch the face recognition module from the sleep state to the working state, so that when the user arrives at the door, face recognition can begin immediately without waiting for the face recognition module to be woken up, which increases the efficiency of face recognition and improves the user experience.
- the embodiments of the present disclosure provide a solution that better balances power saving, user experience, and security by waking up the face recognition module only after a successful Bluetooth pairing connection.
- the device further includes: a first control module, configured to control the face recognition module to enter a sleep state if the face image is not collected within a preset time.
- this implementation controls the face recognition module to enter a sleep state when no face image is collected within a preset time after the module is woken up, thereby reducing power consumption.
- the device further includes: a second control module, configured to control the face recognition module to enter a sleep state if the face recognition fails within a preset time.
- this implementation controls the face recognition module to enter a sleep state when face recognition does not succeed within a preset time after the module is woken up, thereby reducing power consumption.
- the unlocking module 34 is configured to: in response to successful face recognition, determine that the target object has door-opening permission; and, based on the target object having door-opening permission, send a door unlock instruction and/or a door open instruction to at least one door of the vehicle.
- the face recognition includes: living body detection and face authentication;
- the face recognition module 33 includes: a face authentication module, configured to collect the first image via the image sensor in the image acquisition module and perform face authentication based on the first image and pre-registered facial features; and a living body detection module, configured to collect the first depth map corresponding to the first image via the depth sensor in the image acquisition module and perform living body detection based on the first image and the first depth map.
- the living body detection is used to verify whether the target object is a living body, for example, it can be used to verify whether the target object is a human body.
- Face authentication is used to extract the facial features in the collected image and compare them with pre-registered facial features to determine whether they belong to the same person; for example, it can determine whether the facial features in the collected image belong to the vehicle owner.
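The comparison step of face authentication can be sketched as a similarity check between two feature vectors. This is an illustrative Python sketch only: the disclosure does not specify the similarity measure, and the cosine-similarity metric and the threshold value here are assumptions (in practice the feature vectors would come from a neural network).

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def authenticate(collected_features, registered_features, threshold=0.7):
    """Return True if the two feature vectors likely belong to the same person.
    The threshold is an illustrative value, not taken from the disclosure."""
    return cosine_similarity(collected_features, registered_features) >= threshold
```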
- the living body detection module includes: an update sub-module for updating the first depth map based on the first image to obtain a second depth map; and a determining sub-module for determining the living body detection result of the target object based on the first image and the second depth map.
- the image sensor includes an RGB image sensor or an infrared sensor;
- the depth sensor includes a binocular infrared sensor or a time-of-flight TOF sensor.
- Using the depth map containing the target object for living body detection can fully mine the depth information of the target object, thereby improving the accuracy of living body detection.
- the embodiment of the present disclosure uses a depth map containing the human face to perform living body detection, which can fully mine the depth information of the face data, thereby improving the accuracy of living body face detection.
- the TOF sensor adopts a TOF module based on an infrared band.
- by using a TOF module based on the infrared band, the influence of external light on depth map capture can be reduced.
- the update submodule is configured to: based on the first image, update the depth value of the depth failure pixel in the first depth map to obtain the second depth map.
- a depth failure pixel in the depth map refers to a pixel in the depth map with an invalid depth value, that is, a pixel whose depth value is inaccurate or clearly inconsistent with the actual situation.
- the number of depth failure pixels can be one or more. By updating the depth value of at least one depth failure pixel in the depth map, the depth value of the depth failure pixel is more accurate, which helps to improve the accuracy of living body detection.
- the update sub-module is configured to: determine the depth prediction values and association information of multiple pixels in the first image based on the first image, wherein the association information indicates the degree of association between the multiple pixels; and update the first depth map based on the depth prediction values and the association information of the multiple pixels to obtain a second depth map.
- the update submodule is configured to: determine a depth failure pixel in the first depth map; obtain, from the depth prediction values of the multiple pixels, the depth prediction value of the depth failure pixel and the depth prediction values of multiple pixels surrounding the depth failure pixel; obtain, from the association information of the multiple pixels, the degree of association between the depth failure pixel and its multiple surrounding pixels; and determine the updated depth value of the depth failure pixel based on the depth prediction value of the depth failure pixel, the depth prediction values of its surrounding pixels, and the degree of association between the depth failure pixel and its surrounding pixels.
- the update sub-module is configured to: determine the depth associated value of the depth failure pixel based on the depth prediction values of the pixels surrounding the depth failure pixel and the degree of association between the depth failure pixel and its multiple surrounding pixels; and determine the updated depth value of the depth failure pixel based on the depth prediction value of the depth failure pixel and the depth associated value.
- the update sub-module is configured to: use the degree of association between the depth failure pixel and each surrounding pixel as the weight of that surrounding pixel, and perform a weighted sum of the depth prediction values of the multiple surrounding pixels to obtain the depth associated value of the depth failure pixel.
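The weighted-sum update just described can be sketched numerically. This is an illustrative Python sketch under stated assumptions: the disclosure does not fix how the pixel's own prediction is combined with the depth associated value, so the equal-weight combination below is an assumption, as are the function and parameter names.

```python
def update_depth_failure_pixel(depth_pred_center, depth_pred_neighbors, association_weights):
    """Sketch of the update rule described above (names are illustrative).

    depth_pred_center:     depth prediction value of the failing pixel
    depth_pred_neighbors:  depth prediction values of its surrounding pixels
    association_weights:   degree of association between the failing pixel
                           and each surrounding pixel (used as weights)
    """
    total_w = sum(association_weights)
    # Weighted sum of the neighbors' predictions gives the depth associated value
    # (normalized here so the weights act as a convex combination).
    depth_assoc = sum(w * d for w, d in zip(association_weights, depth_pred_neighbors)) / total_w
    # One plausible way to combine the pixel's own prediction with the
    # associated value; the disclosure leaves the exact combination open.
    return 0.5 * depth_pred_center + 0.5 * depth_assoc
```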
- the update submodule is configured to determine depth prediction values of multiple pixels in the first image based on the first image and the first depth map.
- the update submodule is used to: input the first image and the first depth map into a depth prediction neural network for processing, to obtain the depth prediction values of multiple pixels in the first image.
- the update submodule is configured to: perform fusion processing on the first image and the first depth map to obtain a fusion result; and determine the depth prediction values of multiple pixels in the first image based on the fusion result.
- the update sub-module is configured to: input the first image to a correlation detection neural network for processing, and obtain correlation information of multiple pixels in the first image.
- the update submodule is configured to: obtain an image of the target object from the first image; and update the first depth map based on the image of the target object.
- the update sub-module is used to: obtain key point information of the target object in the first image; and obtain an image of the target object from the first image based on the key point information of the target object.
- the contour of the target object is determined based on the key point information of the target object, and the image of the target object is intercepted from the first image according to the contour of the target object.
- the position of the target object obtained through the key point information is more accurate, which is beneficial to improve the accuracy of subsequent living body detection.
- the interference of the background information in the first image on the living body detection can be reduced.
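The keypoint-based cropping described above can be sketched as follows. This is an illustrative pure-Python sketch: a rectangular bounding box with a padding margin stands in for the contour step, and the function name, margin value, and nested-list image representation are all assumptions for illustration.

```python
def crop_target_from_keypoints(image, keypoints, margin=0.1):
    """Crop the region around the target using its key points.

    image:     H x W array-like (nested lists here for simplicity)
    keypoints: list of (x, y) coordinates of the target's key points
    margin:    fractional padding around the keypoint bounding box
    """
    xs = [p[0] for p in keypoints]
    ys = [p[1] for p in keypoints]
    w = max(xs) - min(xs)
    h = max(ys) - min(ys)
    # Expand the tight keypoint box by the margin, clamped to the image.
    x0 = max(0, int(min(xs) - margin * w))
    y0 = max(0, int(min(ys) - margin * h))
    x1 = int(max(xs) + margin * w) + 1
    y1 = int(max(ys) + margin * h) + 1
    return [row[x0:x1] for row in image[y0:y1]]
```

Cropping to the keypoint region removes most of the background pixels before the depth map is updated, which is the interference-reduction effect noted above.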
- the update submodule is used to: perform target detection on the first image to obtain the area where the target object is located; and perform key point detection on the image of that area to obtain the key point information of the target object in the first image.
- the update submodule is configured to: obtain a depth map of the target object from the first depth map; and update the depth map of the target object based on the first image to obtain the second depth map.
- obtaining the second depth map in this way can reduce the interference produced by background information in the first depth map on living body detection.
- the acquired depth map (such as the depth map collected by the depth sensor) may be partially invalid.
- for example, some shooting conditions may randomly cause partial failure of the depth map.
- in addition, some special paper qualities can make a printed face photo produce a similar effect of large-area or partial failure in the depth map.
- the depth map of a prosthesis can likewise be partially invalidated while the imaging of the prosthesis on the image sensor remains normal. Therefore, when some depth maps partially or completely fail, using the depth map to distinguish a living body from a prosthesis will cause errors. Accordingly, in the embodiments of the present disclosure, repairing or updating the first depth map and using the repaired or updated depth map for living body detection helps improve the accuracy of living body detection.
- the determining submodule is configured to: input the first image and the second depth map into a living body detection neural network for processing, and obtain a living body detection result of the target object.
- the determining submodule is configured to: perform feature extraction processing on the first image to obtain first feature information; perform feature extraction processing on the second depth map to obtain second feature information; and determine the living body detection result of the target object based on the first feature information and the second feature information.
- the feature extraction processing can be implemented by a neural network or other machine learning algorithms, and the type of feature information extracted can optionally be learned from samples, which is not limited in the embodiments of the present disclosure.
- the determining submodule is configured to: perform fusion processing on the first feature information and the second feature information to obtain third feature information; and determine based on the third feature information The live detection result of the target object.
- the determining submodule is configured to: obtain the probability that the target object is a living body based on the third feature information; and determine the living body detection result of the target object according to that probability.
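The final step above (mapping the fused third feature information to a probability and then to a result) can be sketched with a single sigmoid unit standing in for the network's classification head. This is an illustrative sketch only; the weights, bias, threshold, and the use of a single linear-plus-sigmoid layer are assumptions, not the disclosed network.

```python
import math

def liveness_result(third_feature_info, weights, bias=0.0, threshold=0.5):
    """Map fused feature information to a (result, probability) pair.
    All parameters are illustrative stand-ins for a learned classifier head."""
    score = sum(w * f for w, f in zip(weights, third_feature_info)) + bias
    prob = 1.0 / (1.0 + math.exp(-score))   # probability the target is a living body
    return ("living body" if prob >= threshold else "prosthesis"), prob
```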
- the device further includes an activation module, configured to activate a password unlocking module provided in the vehicle in response to a face recognition failure, so as to initiate a password unlocking process.
- password unlocking is an alternative to face recognition unlocking.
- the reasons for the failure of face recognition may include at least one of the result of the living body detection being that the target object is a prosthesis, the face authentication failure, the failure of image collection (for example, a camera failure), and the number of recognition times exceeding a predetermined number.
- the password unlocking process is initiated.
- the password entered by the user can be obtained through the touch screen on the B-pillar.
- the device further includes a registration module, used for one or both of the following: performing vehicle owner registration according to the face image of the vehicle owner collected by the image acquisition module; and performing remote registration according to the face image of the vehicle owner collected by the owner's terminal device, and sending the registration information to the vehicle, where the registration information includes the face image of the vehicle owner.
- the functions or modules contained in the device provided in the embodiments of the present disclosure can be used to execute the methods described in the above method embodiments.
- Fig. 14 shows a block diagram of a vehicle face unlocking system according to an embodiment of the present disclosure.
- the vehicle face unlocking system includes: a memory 41, a face recognition module 42, an image acquisition module 43, and a Bluetooth module 44. The face recognition module 42 is connected to the memory 41, the image acquisition module 43, and the Bluetooth module 44. The Bluetooth module 44 includes a microprocessor 441, which wakes up the face recognition module 42 when the Bluetooth pairing connection with a Bluetooth device having the preset identifier succeeds or when such a Bluetooth device is found, and a Bluetooth sensor 442 connected to the microprocessor 441. The face recognition module 42 is also provided with a communication interface for connecting to the door domain controller; if face recognition succeeds, control information for unlocking the door is sent to the door domain controller via the communication interface.
- the memory 41 may include at least one of flash memory (Flash) and DDR3 (Double Data Rate 3) memory.
- the face recognition module 42 may be implemented by SoC (System on Chip).
- the face recognition module 42 is connected to the door domain controller through a CAN (Controller Area Network) bus.
- the image acquisition module 43 includes an image sensor and a depth sensor.
- the depth sensor includes at least one of a binocular infrared sensor and a time-of-flight TOF sensor.
- the depth sensor includes a binocular infrared sensor, and two infrared cameras of the binocular infrared sensor are arranged on both sides of the camera of the image sensor.
- in one example, the image sensor is an RGB sensor, the camera of the image sensor is an RGB camera, and the depth sensor is a binocular infrared sensor; the depth sensor includes two IR (infrared) cameras, which are arranged on both sides of the RGB camera of the image sensor.
- the image acquisition module 43 further includes at least one fill light, the at least one fill light is arranged between the infrared camera of the binocular infrared sensor and the camera of the image sensor, and the at least one fill light includes At least one of the fill light for the image sensor and the fill light for the depth sensor.
- the fill light used for the image sensor can be a white light when the image sensor is an RGB sensor, or an infrared light when the image sensor is an infrared sensor; and when the depth sensor is a binocular infrared sensor, the fill light used for the depth sensor can be an infrared light.
- an infrared lamp is provided between the infrared camera of the binocular infrared sensor and the camera of the image sensor.
- the infrared lamp can use infrared light in the 940 nm band.
- the fill light may be in the normally-on mode. In this example, when the camera of the image acquisition module is in the working state, the fill light is in the on state.
- the fill light can be turned on when the light is insufficient.
- the ambient light intensity can be obtained through the ambient light sensor, and when the ambient light intensity is lower than the light intensity threshold, it is determined that the light is insufficient, and the fill light is turned on.
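The ambient-light-triggered fill-light logic above reduces to a simple threshold comparison. A minimal Python sketch follows; the function name and the lux threshold are illustrative, as the disclosure does not specify a threshold value.

```python
def fill_light_should_turn_on(ambient_lux, lux_threshold=50.0):
    """When the ambient light intensity reported by the ambient light sensor
    falls below the threshold, the light is judged insufficient and the
    fill light is turned on. The threshold value is illustrative."""
    return ambient_lux < lux_threshold
```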
- the image acquisition module 43 further includes a laser, and the laser is disposed between the camera of the depth sensor and the camera of the image sensor.
- in one example, the image sensor is an RGB sensor, the camera of the image sensor is an RGB camera, the depth sensor is a TOF sensor, and the laser is arranged between the camera of the TOF sensor and the RGB camera.
- the laser may be a VCSEL (vertical-cavity surface-emitting laser), and the TOF sensor may collect a depth map based on the laser light emitted by the VCSEL.
- the depth sensor is connected to the face recognition module 42 through an LVDS (Low-Voltage Differential Signaling) interface.
- the vehicle face unlocking system further includes: a password unlocking module 45 for unlocking a vehicle door, and the password unlocking module 45 is connected to the face recognition module 42.
- the password unlocking module 45 includes one or both of a touch screen and a keyboard.
- the touch screen is connected to the face recognition module 42 through FPD-Link (Flat Panel Display Link).
- the vehicle face unlocking system further includes a battery module 46, and the battery module 46 is respectively connected to the microprocessor 441 and the face recognition module 42.
- the memory 41, the face recognition module 42, the Bluetooth module 44, and the battery module 46 may be built on an ECU (Electronic Control Unit).
- Fig. 15 shows a schematic diagram of a vehicle face unlocking system according to an embodiment of the present disclosure.
- the face recognition module is implemented by SoC101
- the memory includes flash memory (Flash) 102 and DDR3 memory 103
- the Bluetooth module includes a Bluetooth sensor (Bluetooth) 104 and a microprocessor (MCU, Microcontroller Unit) 105
- the image acquisition module includes a depth sensor (3D Camera) 200, and the depth sensor 200 is connected to the SoC 101 through an LVDS interface.
- the password unlocking module includes a touch screen (Touch Screen) 300; the touch screen 300 is connected to the SoC 101 through FPD-Link; and the SoC 101 is connected to the door domain controller 400 through a CAN bus.
- FIG. 16 shows a schematic diagram of a car according to an embodiment of the present disclosure. As shown in FIG. 16, the vehicle includes a vehicle-mounted face unlocking system 51, and the vehicle-mounted face unlocking system 51 is connected to the door domain controller 52 of the vehicle.
- the image acquisition module is arranged on the exterior of the vehicle.
- the image acquisition module is arranged in at least one of the following positions: a B-pillar of the vehicle, at least one door, and at least one rearview mirror.
- the face recognition module is arranged in the vehicle, and the face recognition module is connected to the door domain controller via a CAN bus.
- the embodiment of the present disclosure also proposes a computer program, including computer-readable code; when the computer-readable code runs in an electronic device, a processor in the electronic device executes instructions for implementing the above method.
- the embodiments of the present disclosure also provide a computer-readable storage medium on which computer program instructions are stored, and the computer program instructions implement the above-mentioned method when executed by a processor.
- the computer-readable storage medium may be a non-volatile computer-readable storage medium or a volatile computer-readable storage medium.
- An embodiment of the present disclosure also provides an electronic device, including: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to execute the above method.
- the electronic device can be provided as a terminal, server or other form of device.
- Fig. 17 is a block diagram showing an electronic device 800 according to an exemplary embodiment.
- the electronic device 800 may be a mobile phone, a computer, a digital broadcasting terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, and other terminals.
- the electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power supply component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
- the processing component 802 generally controls the overall operations of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations.
- the processing component 802 may include one or more processors 820 to execute instructions to complete all or part of the steps of the foregoing method.
- the processing component 802 may include one or more modules to facilitate the interaction between the processing component 802 and other components.
- the processing component 802 may include a multimedia module to facilitate the interaction between the multimedia component 808 and the processing component 802.
- the memory 804 is configured to store various types of data to support operations in the electronic device 800. Examples of these data include instructions for any application or method operating on the electronic device 800, contact data, phone book data, messages, pictures, videos, etc.
- the memory 804 can be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
- the power supply component 806 provides power for various components of the electronic device 800.
- the power supply component 806 may include a power management system, one or more power supplies, and other components associated with the generation, management, and distribution of power for the electronic device 800.
- the multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and the user.
- the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user.
- the touch panel includes one or more touch sensors to sense touch, sliding, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure related to the touch or slide operation.
- the multimedia component 808 includes a front camera and/or a rear camera. When the electronic device 800 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front camera and rear camera can be a fixed optical lens system or have focal length and optical zoom capabilities.
- the audio component 810 is configured to output and/or input audio signals.
- the audio component 810 includes a microphone (MIC).
- the microphone is configured to receive external audio signals.
- the received audio signal may be further stored in the memory 804 or transmitted via the communication component 816.
- the audio component 810 further includes a speaker for outputting audio signals.
- the I/O interface 812 provides an interface between the processing component 802 and a peripheral interface module.
- the peripheral interface module may be a keyboard, a click wheel, a button, and the like. These buttons may include but are not limited to: home button, volume button, start button, and lock button.
- the sensor component 814 includes one or more sensors for providing the electronic device 800 with various aspects of state evaluation.
- the sensor component 814 can detect the on/off status of the electronic device 800 and the relative positioning of components, for example, the display and keypad of the electronic device 800.
- the sensor component 814 can also detect a change in the position of the electronic device 800 or a component of the electronic device 800, the presence or absence of contact between the user and the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and the temperature change of the electronic device 800.
- the sensor component 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact.
- the sensor component 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications.
- the sensor component 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor or a temperature sensor.
- the communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices.
- the electronic device 800 can access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof.
- the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel.
- the communication component 816 further includes a near field communication (NFC) module to facilitate short-range communication.
- the NFC module can be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
- the electronic device 800 can be implemented by one or more application-specific integrated circuits (ASIC), digital signal processors (DSP), digital signal processing devices (DSPD), programmable logic devices (PLD), field-programmable gate arrays (FPGA), controllers, microcontrollers, microprocessors, or other electronic components, to implement the above methods.
- a non-volatile computer-readable storage medium such as a memory 804 including computer program instructions, which can be executed by the processor 820 of the electronic device 800 to complete the foregoing method.
- the present disclosure may be a system, method, and/or computer program product.
- the computer program product may include a computer-readable storage medium loaded with computer-readable program instructions for enabling a processor to implement various aspects of the present disclosure.
- the computer-readable storage medium may be a tangible device that can hold and store instructions used by the instruction execution device.
- the computer-readable storage medium may be, for example, but not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
- Computer-readable storage media include: portable computer disks, hard disks, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), portable compact disk read-only memory (CD-ROM), digital versatile disks (DVD), memory sticks, floppy disks, mechanical encoding devices such as punched cards or raised structures in grooves with instructions stored thereon, and any suitable combination of the foregoing.
- the computer-readable storage medium used here is not to be interpreted as a transient signal itself, such as a radio wave or other freely propagating electromagnetic wave, an electromagnetic wave propagating through a waveguide or other transmission medium (for example, a light pulse through a fiber optic cable), or an electrical signal transmitted through a wire.
- the computer-readable program instructions described herein can be downloaded from a computer-readable storage medium to various computing/processing devices, or downloaded to an external computer or external storage device via a network, such as the Internet, a local area network, a wide area network, and/or a wireless network.
- the network may include copper transmission cables, optical fiber transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers.
- the network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network, and forwards the computer-readable program instructions for storage in the computer-readable storage medium in each computing/processing device .
- the computer program instructions used to perform the operations of the present disclosure may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" language or similar programming languages.
- Computer-readable program instructions can be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server.
- the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).
- an electronic circuit, such as a programmable logic circuit, a field-programmable gate array (FPGA), or a programmable logic array (PLA), can be customized using the state information of the computer-readable program instructions; the electronic circuit can execute the computer-readable program instructions to realize various aspects of the present disclosure.
- These computer-readable program instructions can be provided to the processor of a general-purpose computer, a special-purpose computer, or another programmable data processing apparatus to produce a machine, such that when these instructions are executed by the processor of the computer or other programmable data processing apparatus, an apparatus is produced that implements the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams. These computer-readable program instructions can also be stored in a computer-readable storage medium; these instructions cause computers, programmable data processing apparatuses, and/or other devices to work in a specific manner, so that the computer-readable medium storing the instructions includes an article of manufacture, which includes instructions for implementing various aspects of the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
- each block in the flowcharts or block diagrams may represent a module, program segment, or part of an instruction that contains one or more executable instructions for implementing the specified logical function. In some alternative implementations, the functions marked in the blocks may also occur in a different order from that marked in the drawings; for example, two consecutive blocks can actually be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved.
- Each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or actions, or by a combination of dedicated hardware and computer instructions.
Landscapes
- Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- Computer Networks & Wireless Communication (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Lock And Its Accessories (AREA)
- Image Analysis (AREA)
- Collating Specific Patterns (AREA)
Abstract
Description
Claims (108)
- 1. A vehicle door unlocking method, comprising: searching, via a Bluetooth module provided in a vehicle, for a Bluetooth device with a preset identifier; in response to finding the Bluetooth device with the preset identifier, establishing a Bluetooth pairing connection between the Bluetooth module and the Bluetooth device with the preset identifier; in response to the Bluetooth pairing connection succeeding, waking up and controlling an image acquisition module provided in the vehicle to acquire a first image of a target object; performing face recognition based on the first image; and in response to the face recognition succeeding, sending a door unlocking instruction and/or a door opening instruction to at least one door of the vehicle.
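Claim 1 describes a strictly sequential flow: Bluetooth search, pairing, camera wake-up and image capture, face recognition, then unlock. As an illustrative sketch only, the sequence could be expressed as below; every callable name here is a hypothetical placeholder, not an interface from the patent:

```python
def try_unlock(search, pair, wake_and_capture, recognize, send_unlock, preset_id):
    """Hypothetical sketch of the unlock sequence in claim 1.
    Each argument is an injected callable standing in for a vehicle
    component (Bluetooth module, image acquisition module, etc.)."""
    device = search(preset_id)           # step 1: search for the preset-ID device
    if device is None:
        return "no_device"
    if not pair(device):                 # step 2: Bluetooth pairing connection
        return "pair_failed"
    first_image = wake_and_capture()     # step 3: wake camera, acquire first image
    if not recognize(first_image):       # step 4: face recognition on first image
        return "face_rejected"
    send_unlock()                        # step 5: door unlock/open instruction
    return "unlocked"
```

Note that each guard exits early, mirroring the claim's "in response to" conditions: no later step runs unless every earlier step succeeded.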
- 2. The method according to claim 1, wherein searching, via the Bluetooth module provided in the vehicle, for the Bluetooth device with the preset identifier comprises: when the vehicle is in an ignition-off state, or in an ignition-off and door-locked state, searching, via the Bluetooth module provided in the vehicle, for the Bluetooth device with the preset identifier.
- 3. The method according to claim 1 or 2, wherein there is one Bluetooth device with the preset identifier.
- 4. The method according to claim 1 or 2, wherein there are multiple Bluetooth devices with preset identifiers; and establishing the Bluetooth pairing connection between the Bluetooth module and the Bluetooth device with the preset identifier in response to finding the Bluetooth device with the preset identifier comprises: in response to finding any one Bluetooth device with a preset identifier, establishing a Bluetooth pairing connection between the Bluetooth module and that Bluetooth device.
- 5. The method according to any one of claims 1 to 4, wherein waking up and controlling the image acquisition module provided in the vehicle to acquire the first image of the target object comprises: waking up a face recognition module provided in the vehicle; and controlling, by the awakened face recognition module, the image acquisition module to acquire the first image of the target object.
- 6. The method according to claim 5, wherein after waking up the face recognition module provided in the vehicle, the method further comprises: if no face image is acquired within a preset time, controlling the face recognition module to enter a sleep state.
- 7. The method according to claim 5, wherein after waking up the face recognition module provided in the vehicle, the method further comprises: if face recognition is not passed within a preset time, controlling the face recognition module to enter a sleep state.
- 8. The method according to any one of claims 1 to 7, wherein sending the door unlocking instruction and/or the door opening instruction to at least one door of the vehicle in response to the face recognition succeeding comprises: in response to the face recognition succeeding, determining a door for which the target object has door-opening permission; and sending the door unlocking instruction and/or the door opening instruction to the at least one door of the vehicle according to the door for which the target object has door-opening permission.
- 9. The method according to any one of claims 1 to 8, wherein the face recognition comprises living body detection and face authentication; and performing face recognition based on the first image comprises: acquiring the first image via an image sensor in the image acquisition module, and performing face authentication based on the first image and pre-registered facial features; and acquiring a first depth map corresponding to the first image via a depth sensor in the image acquisition module, and performing living body detection based on the first image and the first depth map.
- 10. The method according to claim 9, wherein performing living body detection based on the first image and the first depth map comprises: updating the first depth map based on the first image to obtain a second depth map; and determining a living body detection result of the target object based on the first image and the second depth map.
- 11. The method according to claim 9 or 10, wherein the image sensor comprises an RGB image sensor or an infrared sensor, and the depth sensor comprises a binocular infrared sensor or a time-of-flight (TOF) sensor.
- 12. The method according to claim 11, wherein the TOF sensor uses a TOF module based on an infrared band.
- 13. The method according to any one of claims 10 to 12, wherein updating the first depth map based on the first image to obtain the second depth map comprises: updating, based on the first image, depth values of depth-failed pixels in the first depth map to obtain the second depth map.
- 14. The method according to any one of claims 10 to 13, wherein updating the first depth map based on the first image to obtain the second depth map comprises: determining, based on the first image, depth prediction values and association information of multiple pixels in the first image, wherein the association information of the multiple pixels indicates degrees of association between the multiple pixels; and updating the first depth map based on the depth prediction values and the association information of the multiple pixels to obtain the second depth map.
- 15. The method according to claim 14, wherein updating the first depth map based on the depth prediction values and the association information of the multiple pixels to obtain the second depth map comprises: determining depth-failed pixels in the first depth map; obtaining, from the depth prediction values of the multiple pixels, the depth prediction value of a depth-failed pixel and the depth prediction values of multiple surrounding pixels of the depth-failed pixel; obtaining, from the association information of the multiple pixels, the degrees of association between the depth-failed pixel and the multiple surrounding pixels of the depth-failed pixel; and determining an updated depth value of the depth-failed pixel based on the depth prediction value of the depth-failed pixel, the depth prediction values of the multiple surrounding pixels of the depth-failed pixel, and the degrees of association between the depth-failed pixel and the surrounding pixels of the depth-failed pixel.
- 16. The method according to claim 15, wherein determining the updated depth value of the depth-failed pixel based on the depth prediction value of the depth-failed pixel, the depth prediction values of the multiple surrounding pixels of the depth-failed pixel, and the degrees of association between the depth-failed pixel and the multiple surrounding pixels of the depth-failed pixel comprises: determining a depth association value of the depth-failed pixel based on the depth prediction values of the surrounding pixels of the depth-failed pixel and the degrees of association between the depth-failed pixel and the multiple surrounding pixels of the depth-failed pixel; and determining the updated depth value of the depth-failed pixel based on the depth prediction value of the depth-failed pixel and the depth association value.
- 17. The method according to claim 16, wherein determining the depth association value of the depth-failed pixel based on the depth prediction values of the surrounding pixels of the depth-failed pixel and the degrees of association between the depth-failed pixel and the multiple surrounding pixels of the depth-failed pixel comprises: using the degree of association between the depth-failed pixel and each surrounding pixel as the weight of that surrounding pixel, and performing weighted summation on the depth prediction values of the multiple surrounding pixels of the depth-failed pixel to obtain the depth association value of the depth-failed pixel.
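Claims 15 to 17 repair a depth-failed pixel with a weighted sum: each surrounding pixel's predicted depth is weighted by its degree of association with the failed pixel, giving a depth association value that is then combined with the failed pixel's own predicted depth. A minimal numeric sketch follows; the weighted summation is exactly claim 17, but the convex combination used at the end is an illustrative assumption, since the claims do not fix how the prediction and association values are combined:

```python
def depth_association_value(surround_preds, association_degrees):
    """Claim 17: the association degree between the depth-failed pixel and
    each surrounding pixel is used as that pixel's weight, and the
    surrounding pixels' depth predictions are weighted-summed."""
    return sum(w * d for w, d in zip(association_degrees, surround_preds))

def updated_depth(pred_value, assoc_value, alpha=0.5):
    """Claims 15-16 only state that the updated depth is determined from
    the pixel's own depth prediction and the depth association value;
    this alpha-blend is an assumed combination rule, not the patent's."""
    return alpha * pred_value + (1 - alpha) * assoc_value
```

For example, with surrounding predictions `[2.0, 2.2, 1.8, 2.0]` and normalized association degrees `[0.4, 0.3, 0.2, 0.1]`, the association value is 2.02, and blending with an own-pixel prediction of 2.1 gives 2.06.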
- 18. The method according to any one of claims 14 to 17, wherein determining, based on the first image, the depth prediction values of the multiple pixels in the first image comprises: determining the depth prediction values of the multiple pixels in the first image based on the first image and the first depth map.
- 19. The method according to claim 18, wherein determining the depth prediction values of the multiple pixels in the first image based on the first image and the first depth map comprises: inputting the first image and the first depth map into a depth prediction neural network for processing, to obtain the depth prediction values of the multiple pixels in the first image.
- 20. The method according to claim 18 or 19, wherein determining the depth prediction values of the multiple pixels in the first image based on the first image and the first depth map comprises: performing fusion processing on the first image and the first depth map to obtain a fusion result; and determining the depth prediction values of the multiple pixels in the first image based on the fusion result.
- 21. The method according to any one of claims 14 to 20, wherein determining, based on the first image, the association information of the multiple pixels in the first image comprises: inputting the first image into an association degree detection neural network for processing, to obtain the association information of the multiple pixels in the first image.
- 22. The method according to any one of claims 10 to 21, wherein updating the first depth map based on the first image comprises: obtaining an image of the target object from the first image; and updating the first depth map based on the image of the target object.
- 23. The method according to claim 22, wherein obtaining the image of the target object from the first image comprises: obtaining key point information of the target object in the first image; and obtaining the image of the target object from the first image based on the key point information of the target object.
- 24. The method according to claim 23, wherein obtaining the key point information of the target object in the first image comprises: performing target detection on the first image to obtain a region where the target object is located; and performing key point detection on an image of the region where the target object is located, to obtain the key point information of the target object in the first image.
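Claims 23 and 24 obtain the target-object image in two stages: target detection yields the region containing the object, and key point detection within that region yields key points from which the object image is taken. A hedged sketch of the final cropping step, using an axis-aligned bounding box around the key points (the particular cropping rule is an assumption; the claims only require that the object image be obtained based on the key point information):

```python
def bounding_box(keypoints, margin=0):
    """Axis-aligned box (x0, y0, x1, y1) around (x, y) key points,
    optionally padded by a margin on each side."""
    xs = [x for x, _ in keypoints]
    ys = [y for _, y in keypoints]
    return (min(xs) - margin, min(ys) - margin,
            max(xs) + margin, max(ys) + margin)

def crop(image, box):
    """Crop a row-major 2D image (list of rows) to the inclusive box."""
    x0, y0, x1, y1 = (int(v) for v in box)
    return [row[x0:x1 + 1] for row in image[y0:y1 + 1]]
```

In a real pipeline the key points would come from a detector run on the detected region; here they are supplied directly for illustration.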
- 25. The method according to any one of claims 10 to 24, wherein updating the first depth map based on the first image to obtain the second depth map comprises: obtaining a depth map of the target object from the first depth map; and updating the depth map of the target object based on the first image to obtain the second depth map.
- 26. The method according to any one of claims 10 to 25, wherein determining the living body detection result of the target object based on the first image and the second depth map comprises: inputting the first image and the second depth map into a living body detection neural network for processing, to obtain the living body detection result of the target object.
- 27. The method according to any one of claims 10 to 26, wherein determining the living body detection result of the target object based on the first image and the second depth map comprises: performing feature extraction processing on the first image to obtain first feature information; performing feature extraction processing on the second depth map to obtain second feature information; and determining the living body detection result of the target object based on the first feature information and the second feature information.
- 28. The method according to claim 27, wherein determining the living body detection result of the target object based on the first feature information and the second feature information comprises: performing fusion processing on the first feature information and the second feature information to obtain third feature information; and determining the living body detection result of the target object based on the third feature information.
- 29. The method according to claim 28, wherein determining the living body detection result of the target object based on the third feature information comprises: obtaining, based on the third feature information, a probability that the target object is a living body; and determining the living body detection result of the target object according to the probability that the target object is a living body.
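Claims 27 to 29 fuse feature information from the first image and the second depth map, map the fused (third) feature information to a probability that the target is a living body, and decide the detection result from that probability. A toy sketch with concatenation as the fusion step, a linear score through a sigmoid as the probability, and a fixed threshold as the decision rule; all three choices are illustrative assumptions, since the claims leave the fusion operation and decision rule to the implementation:

```python
import math

def fuse(feat_image, feat_depth):
    # Fusion by concatenation -- an assumed choice; claim 28 only
    # requires some fusion of the first and second feature information.
    return feat_image + feat_depth

def liveness_probability(fused, weights, bias=0.0):
    # A linear score squashed through a sigmoid stands in for whatever
    # the detection network computes from the third feature information.
    score = sum(w * f for w, f in zip(weights, fused)) + bias
    return 1.0 / (1.0 + math.exp(-score))

def liveness_result(prob, threshold=0.5):
    # Claim 29: the detection result is determined from the probability;
    # the 0.5 threshold is an assumption.
    return "live" if prob >= threshold else "spoof"
```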
- 30. The method according to any one of claims 1 to 29, wherein after performing face recognition based on the first image, the method further comprises: in response to the face recognition failing, activating a password unlocking module provided in the vehicle to start a password unlocking process.
- 31. The method according to any one of claims 1 to 30, wherein the method further comprises one or both of the following: performing vehicle owner registration according to a face image of the vehicle owner acquired by the image acquisition module; and performing remote registration according to a face image of the vehicle owner acquired by a terminal device of the vehicle owner, and sending registration information to the vehicle, wherein the registration information includes the face image of the vehicle owner.
- 32. A vehicle door unlocking method, comprising: searching, via a Bluetooth module provided in a vehicle, for a Bluetooth device with a preset identifier; in response to finding the Bluetooth device with the preset identifier, waking up and controlling an image acquisition module provided in the vehicle to acquire a first image of a target object; performing face recognition based on the first image; and in response to the face recognition succeeding, sending a door unlocking instruction and/or a door opening instruction to at least one door of the vehicle.
- 33. The method according to claim 32, wherein searching, via the Bluetooth module provided in the vehicle, for the Bluetooth device with the preset identifier comprises: when the vehicle is in an ignition-off state, or in an ignition-off and door-locked state, searching, via the Bluetooth module provided in the vehicle, for the Bluetooth device with the preset identifier.
- 34. The method according to claim 31 or 32, wherein there is one Bluetooth device with the preset identifier.
- 35. The method according to claim 31 or 32, wherein there are multiple Bluetooth devices with preset identifiers; and waking up and controlling the image acquisition module provided in the vehicle to acquire the first image of the target object in response to finding the Bluetooth device with the preset identifier comprises: in response to finding any one Bluetooth device with a preset identifier, waking up and controlling the image acquisition module provided in the vehicle to acquire the first image of the target object.
- 36. The method according to any one of claims 32 to 35, wherein waking up and controlling the image acquisition module provided in the vehicle to acquire the first image of the target object comprises: waking up a face recognition module provided in the vehicle; and controlling, by the awakened face recognition module, the image acquisition module to acquire the first image of the target object.
- 37. The method according to claim 36, wherein after waking up the face recognition module provided in the vehicle, the method further comprises: if no face image is acquired within a preset time, controlling the face recognition module to enter a sleep state.
- 38. The method according to claim 36, wherein after waking up the face recognition module provided in the vehicle, the method further comprises: if face recognition is not passed within a preset time, controlling the face recognition module to enter a sleep state.
- 39. The method according to any one of claims 32 to 38, wherein sending the door unlocking instruction and/or the door opening instruction to at least one door of the vehicle in response to the face recognition succeeding comprises: in response to the face recognition succeeding, determining a door for which the target object has door-opening permission; and sending the door unlocking instruction and/or the door opening instruction to the at least one door of the vehicle according to the door for which the target object has door-opening permission.
- 40. The method according to any one of claims 32 to 39, wherein the face recognition comprises living body detection and face authentication; and performing face recognition based on the first image comprises: acquiring the first image via an image sensor in the image acquisition module, and performing face authentication based on the first image and pre-registered facial features; and acquiring a first depth map corresponding to the first image via a depth sensor in the image acquisition module, and performing living body detection based on the first image and the first depth map.
- 41. The method according to claim 40, wherein performing living body detection based on the first image and the first depth map comprises: updating the first depth map based on the first image to obtain a second depth map; and determining a living body detection result of the target object based on the first image and the second depth map.
- 42. The method according to claim 40 or 41, wherein the image sensor comprises an RGB image sensor or an infrared sensor, and the depth sensor comprises a binocular infrared sensor or a time-of-flight (TOF) sensor.
- 43. The method according to claim 42, wherein the TOF sensor uses a TOF module based on an infrared band.
- 44. The method according to any one of claims 41 to 43, wherein updating the first depth map based on the first image to obtain the second depth map comprises: updating, based on the first image, depth values of depth-failed pixels in the first depth map to obtain the second depth map.
- 45. The method according to any one of claims 41 to 44, wherein updating the first depth map based on the first image to obtain the second depth map comprises: determining, based on the first image, depth prediction values and association information of multiple pixels in the first image, wherein the association information of the multiple pixels indicates degrees of association between the multiple pixels; and updating the first depth map based on the depth prediction values and the association information of the multiple pixels to obtain the second depth map.
- 46. The method according to claim 45, wherein updating the first depth map based on the depth prediction values and the association information of the multiple pixels to obtain the second depth map comprises: determining depth-failed pixels in the first depth map; obtaining, from the depth prediction values of the multiple pixels, the depth prediction value of a depth-failed pixel and the depth prediction values of multiple surrounding pixels of the depth-failed pixel; obtaining, from the association information of the multiple pixels, the degrees of association between the depth-failed pixel and the multiple surrounding pixels of the depth-failed pixel; and determining an updated depth value of the depth-failed pixel based on the depth prediction value of the depth-failed pixel, the depth prediction values of the multiple surrounding pixels of the depth-failed pixel, and the degrees of association between the depth-failed pixel and the surrounding pixels of the depth-failed pixel.
- 47. The method according to claim 46, wherein determining the updated depth value of the depth-failed pixel based on the depth prediction value of the depth-failed pixel, the depth prediction values of the multiple surrounding pixels of the depth-failed pixel, and the degrees of association between the depth-failed pixel and the multiple surrounding pixels of the depth-failed pixel comprises: determining a depth association value of the depth-failed pixel based on the depth prediction values of the surrounding pixels of the depth-failed pixel and the degrees of association between the depth-failed pixel and the multiple surrounding pixels of the depth-failed pixel; and determining the updated depth value of the depth-failed pixel based on the depth prediction value of the depth-failed pixel and the depth association value.
- 48. The method according to claim 47, wherein determining the depth association value of the depth-failed pixel based on the depth prediction values of the surrounding pixels of the depth-failed pixel and the degrees of association between the depth-failed pixel and the multiple surrounding pixels of the depth-failed pixel comprises: using the degree of association between the depth-failed pixel and each surrounding pixel as the weight of that surrounding pixel, and performing weighted summation on the depth prediction values of the multiple surrounding pixels of the depth-failed pixel to obtain the depth association value of the depth-failed pixel.
- 49. The method according to any one of claims 45 to 48, wherein determining, based on the first image, the depth prediction values of the multiple pixels in the first image comprises: determining the depth prediction values of the multiple pixels in the first image based on the first image and the first depth map.
- 50. The method according to claim 49, wherein determining the depth prediction values of the multiple pixels in the first image based on the first image and the first depth map comprises: inputting the first image and the first depth map into a depth prediction neural network for processing, to obtain the depth prediction values of the multiple pixels in the first image.
- 51. The method according to claim 49 or 50, wherein determining the depth prediction values of the multiple pixels in the first image based on the first image and the first depth map comprises: performing fusion processing on the first image and the first depth map to obtain a fusion result; and determining the depth prediction values of the multiple pixels in the first image based on the fusion result.
- 根据权利要求45至51中任意一项所述的方法,其特征在于,所述基于所述第一图像,确定所述第一图像中多个像素的关联信息,包括:The method according to any one of claims 45 to 51, wherein the determining the associated information of multiple pixels in the first image based on the first image comprises:将所述第一图像输入到关联度检测神经网络进行处理,得到所述第一图像中多个像素的关联信息。The first image is input to the correlation detection neural network for processing, and correlation information of multiple pixels in the first image is obtained.
- 根据权利要求41至52中任意一项所述的方法,其特征在于,所述基于所述第一图像,更新所述第一深度图,包括:The method according to any one of claims 41 to 52, wherein the updating the first depth map based on the first image comprises:从所述第一图像中获取所述目标对象的图像;Acquiring an image of the target object from the first image;基于所述目标对象的图像,更新所述第一深度图。Based on the image of the target object, the first depth map is updated.
- 根据权利要求53所述的方法,其特征在于,所述从所述第一图像中获取所述目标对象的图像,包括:The method according to claim 53, wherein said acquiring an image of said target object from said first image comprises:获取所述第一图像中所述目标对象的关键点信息;Acquiring key point information of the target object in the first image;基于所述目标对象的关键点信息,从所述第一图像中获取所述目标对象的图像。Based on the key point information of the target object, an image of the target object is acquired from the first image.
- 根据权利要求54所述的方法,其特征在于,所述获取所述第一图像中所述目标对象的关键点信息,包括:The method according to claim 54, wherein the acquiring key point information of the target object in the first image comprises:对所述第一图像进行目标检测,得到所述目标对象所在区域;Performing target detection on the first image to obtain the area where the target object is located;对所述目标对象所在区域的图像进行关键点检测,得到所述第一图像中所述目标对象的关键点信息。Perform key point detection on the image of the area where the target object is located to obtain key point information of the target object in the first image.
- The method according to any one of claims 41 to 55, wherein the updating of the first depth map based on the first image to obtain a second depth map comprises: acquiring a depth map of the target object from the first depth map; and updating the depth map of the target object based on the first image to obtain the second depth map.
- The method according to any one of claims 41 to 56, wherein the determining of the living body detection result of the target object based on the first image and the second depth map comprises: inputting the first image and the second depth map into a living body detection neural network for processing to obtain the living body detection result of the target object.
- The method according to any one of claims 41 to 57, wherein the determining of the living body detection result of the target object based on the first image and the second depth map comprises: performing feature extraction processing on the first image to obtain first feature information; performing feature extraction processing on the second depth map to obtain second feature information; and determining the living body detection result of the target object based on the first feature information and the second feature information.
- The method according to claim 58, wherein the determining of the living body detection result of the target object based on the first feature information and the second feature information comprises: performing fusion processing on the first feature information and the second feature information to obtain third feature information; and determining the living body detection result of the target object based on the third feature information.
- The method according to claim 59, wherein the determining of the living body detection result of the target object based on the third feature information comprises: obtaining, based on the third feature information, a probability that the target object is a living body; and determining the living body detection result of the target object according to the probability that the target object is a living body.
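The three claims above describe a fusion-then-decision chain: extract features from the image and the depth map, fuse them into third feature information, map that to a liveness probability, and decide from the probability. A NumPy sketch of the fusion and decision steps only; concatenation as the fusion operation, the linear weights, and the 0.5 threshold are illustrative assumptions (the feature-extraction networks are not shown):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def liveness_decision(first_feat, second_feat, w, b, threshold=0.5):
    """Fuse the two feature vectors (here: concatenation), map the fused
    vector to a liveness probability, and threshold it."""
    third_feat = np.concatenate([first_feat, second_feat])  # fusion -> third feature info
    prob = sigmoid(third_feat @ w + b)                      # probability of "living body"
    return prob, bool(prob >= threshold)

# Toy 3-dim features from the image branch and the depth branch.
f1 = np.array([0.2, -0.1, 0.4])
f2 = np.array([0.3, 0.0, -0.2])
w = np.array([1.0, 0.5, -0.5, 0.8, 1.2, 0.1])  # illustrative weights
prob, is_live = liveness_decision(f1, f2, w, b=0.0)
```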
- The method according to any one of claims 32 to 60, wherein after the face recognition is performed based on the first image, the method further comprises: in response to a failure of the face recognition, activating a password unlocking module provided in the vehicle to start a password unlocking process.
- The method according to any one of claims 32 to 61, wherein the method further comprises one or both of the following: performing vehicle owner registration according to a face image of the vehicle owner collected by the image acquisition module; and performing remote registration according to a face image of the vehicle owner collected by a terminal device of the vehicle owner, and sending registration information to the vehicle, wherein the registration information includes the face image of the vehicle owner.
- A vehicle door unlocking device, comprising: a search module configured to search, via a Bluetooth module provided in a vehicle, for a Bluetooth device with a preset identifier; a wake-up module configured to, in response to the Bluetooth device with the preset identifier being found, establish a Bluetooth pairing connection between the Bluetooth module and the Bluetooth device with the preset identifier and, in response to the Bluetooth pairing connection succeeding, wake up and control an image acquisition module provided in the vehicle to collect a first image of a target object, or, in response to the Bluetooth device with the preset identifier being found, wake up and control the image acquisition module provided in the vehicle to collect the first image of the target object; a face recognition module configured to perform face recognition based on the first image; and an unlocking module configured to, in response to the face recognition succeeding, send a door unlocking instruction and/or a door opening instruction to at least one door of the vehicle.
- The device according to claim 63, wherein the search module is configured to: search, via the Bluetooth module provided in the vehicle, for the Bluetooth device with the preset identifier when the vehicle is in an ignition-off state, or in an ignition-off state with the doors locked.
- The device according to claim 63 or 64, wherein the number of Bluetooth devices with the preset identifier is one.
- The device according to claim 63 or 64, wherein the number of Bluetooth devices with preset identifiers is multiple; and the wake-up module is configured to: in response to any one Bluetooth device with a preset identifier being found, establish a Bluetooth pairing connection between the Bluetooth module and that Bluetooth device; or, in response to any one Bluetooth device with a preset identifier being found, wake up and control the image acquisition module provided in the vehicle to collect the first image of the target object.
- The device according to any one of claims 63 to 66, wherein the wake-up module comprises: a wake-up sub-module configured to wake up a face recognition module provided in the vehicle; and a control sub-module configured to control, via the awakened face recognition module, the image acquisition module to collect the first image of the target object.
- The device according to claim 67, wherein the device further comprises: a first control module configured to control the face recognition module to enter a sleep state if no face image is collected within a preset time.
- The device according to claim 67, wherein the device further comprises: a second control module configured to control the face recognition module to enter a sleep state if the face recognition does not succeed within a preset time.
- The device according to any one of claims 63 to 69, wherein the unlocking module is configured to: in response to the face recognition succeeding, determine the doors that the target object has permission to open; and send a door unlocking instruction and/or a door opening instruction to at least one door of the vehicle according to the doors that the target object has permission to open.
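The unlocking module in the claim above first resolves which doors the recognized person is allowed to open, and only then issues unlock instructions for those doors. A sketch with a hypothetical permission table; the user identifiers and door names are illustrative, not from the patent:

```python
# Hypothetical permission table: recognized identity -> doors that identity may open.
DOOR_PERMISSIONS = {
    "owner": {"front_left", "front_right", "rear_left", "rear_right", "trunk"},
    "courier": {"trunk"},  # e.g. trunk-only delivery access
}

def doors_to_unlock(user_id, requested=None):
    """Return the set of doors that should receive an unlock instruction.
    If `requested` is given, only permitted doors from that set are unlocked."""
    allowed = DOOR_PERMISSIONS.get(user_id, set())
    if requested is None:
        return allowed
    return allowed & set(requested)

print(sorted(doors_to_unlock("courier")))  # ['trunk']
```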
- The device according to any one of claims 63 to 70, wherein the face recognition comprises living body detection and face authentication; and the face recognition module comprises: a face authentication module configured to collect the first image via an image sensor in the image acquisition module and perform face authentication based on the first image and pre-registered facial features; and a living body detection module configured to collect a first depth map corresponding to the first image via a depth sensor in the image acquisition module and perform living body detection based on the first image and the first depth map.
- The device according to claim 71, wherein the living body detection module comprises: an update sub-module configured to update the first depth map based on the first image to obtain a second depth map; and a determining sub-module configured to determine the living body detection result of the target object based on the first image and the second depth map.
- The device according to claim 71 or 72, wherein the image sensor comprises an RGB image sensor or an infrared sensor; and the depth sensor comprises a binocular infrared sensor or a time-of-flight (TOF) sensor.
- The device according to claim 73, wherein the TOF sensor adopts a TOF module based on an infrared band.
- The device according to any one of claims 72 to 74, wherein the update sub-module is configured to: update, based on the first image, the depth values of depth failure pixels in the first depth map to obtain the second depth map.
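The claim above updates only the depth-failure pixels while leaving valid depth measurements untouched. A minimal NumPy sketch, assuming failed pixels are encoded as zeros in the first depth map (a common ToF/stereo convention, not stated in the claim) and that per-pixel depth predictions are already available:

```python
import numpy as np

def update_failed_depths(first_depth, predicted_depth, invalid_value=0.0):
    """Replace only the depth-failure pixels (value == invalid_value)
    with the per-pixel depth predictions derived from the first image."""
    failed = first_depth == invalid_value          # depth-failure mask
    second_depth = np.where(failed, predicted_depth, first_depth)
    return second_depth, failed

first = np.array([[1.2, 0.0], [0.0, 0.9]])   # 0.0 marks sensor dropouts
pred  = np.array([[1.1, 1.3], [1.0, 1.0]])   # predictions from the first image
second, mask = update_failed_depths(first, pred)
print(second.tolist())  # [[1.2, 1.3], [1.0, 0.9]]
```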
- The device according to any one of claims 72 to 75, wherein the update sub-module is configured to: determine, based on the first image, depth prediction values and association information of a plurality of pixels in the first image, wherein the association information of the plurality of pixels indicates degrees of association between the plurality of pixels; and update the first depth map based on the depth prediction values and the association information of the plurality of pixels to obtain the second depth map.
- The device according to claim 76, wherein the update sub-module is configured to: determine depth failure pixels in the first depth map; acquire, from the depth prediction values of the plurality of pixels, the depth prediction value of a depth failure pixel and the depth prediction values of a plurality of surrounding pixels of the depth failure pixel; acquire, from the association information of the plurality of pixels, the degrees of association between the depth failure pixel and the plurality of surrounding pixels of the depth failure pixel; and determine an updated depth value of the depth failure pixel based on the depth prediction value of the depth failure pixel, the depth prediction values of the plurality of surrounding pixels of the depth failure pixel, and the degrees of association between the depth failure pixel and the surrounding pixels of the depth failure pixel.
- The device according to claim 77, wherein the update sub-module is configured to: determine a depth association value of the depth failure pixel based on the depth prediction values of the surrounding pixels of the depth failure pixel and the degrees of association between the depth failure pixel and the plurality of surrounding pixels of the depth failure pixel; and determine the updated depth value of the depth failure pixel based on the depth prediction value of the depth failure pixel and the depth association value.
- The device according to claim 78, wherein the update sub-module is configured to: take the degree of association between the depth failure pixel and each surrounding pixel as the weight of that surrounding pixel, and perform weighted summation on the depth prediction values of the plurality of surrounding pixels of the depth failure pixel to obtain the depth association value of the depth failure pixel.
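The arithmetic for one failed pixel in the claims above can be sketched directly: the association degrees serve as weights in a weighted sum over the surrounding pixels' depth predictions (the depth association value), which is then combined with the failed pixel's own prediction. A NumPy sketch; averaging the two terms in the final step is an illustrative choice, since the claims only require that both terms are used:

```python
import numpy as np

def updated_depth_for_failed_pixel(own_pred, surround_preds, assoc_degrees):
    """assoc_degrees[i] is the degree of association between the failed pixel
    and surrounding pixel i; it is used directly as that pixel's weight."""
    assoc = np.asarray(assoc_degrees, dtype=float)
    # Weighted summation over surrounding predictions -> depth association value.
    depth_assoc_value = np.dot(assoc, surround_preds)
    # Combine own prediction with the association value; a plain average
    # is one simple (assumed) combination rule.
    return 0.5 * (own_pred + depth_assoc_value)

surround = np.array([1.0, 1.2, 0.8, 1.1])   # predictions of 4 surrounding pixels
weights = np.array([0.4, 0.3, 0.2, 0.1])    # association degrees, summing to 1
d = updated_depth_for_failed_pixel(1.05, surround, weights)
```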
- The device according to any one of claims 76 to 79, wherein the update sub-module is configured to: determine the depth prediction values of the plurality of pixels in the first image based on the first image and the first depth map.
- The device according to claim 80, wherein the update sub-module is configured to: input the first image and the first depth map into a depth prediction neural network for processing to obtain the depth prediction values of the plurality of pixels in the first image.
- The device according to claim 80 or 81, wherein the update sub-module is configured to: perform fusion processing on the first image and the first depth map to obtain a fusion result; and determine the depth prediction values of the plurality of pixels in the first image based on the fusion result.
- The device according to any one of claims 76 to 82, wherein the update sub-module is configured to: input the first image into an association detection neural network for processing to obtain the association information of the plurality of pixels in the first image.
- The device according to any one of claims 72 to 83, wherein the update sub-module is configured to: acquire an image of the target object from the first image; and update the first depth map based on the image of the target object.
- The device according to claim 84, wherein the update sub-module is configured to: acquire key point information of the target object in the first image; and acquire the image of the target object from the first image based on the key point information of the target object.
- The device according to claim 85, wherein the update sub-module is configured to: perform target detection on the first image to obtain a region where the target object is located; and perform key point detection on an image of the region where the target object is located to obtain the key point information of the target object in the first image.
- The device according to any one of claims 72 to 86, wherein the update sub-module is configured to: acquire a depth map of the target object from the first depth map; and update the depth map of the target object based on the first image to obtain the second depth map.
- The device according to any one of claims 72 to 87, wherein the determining sub-module is configured to: input the first image and the second depth map into a living body detection neural network for processing to obtain the living body detection result of the target object.
- The device according to any one of claims 72 to 88, wherein the determining sub-module is configured to: perform feature extraction processing on the first image to obtain first feature information; perform feature extraction processing on the second depth map to obtain second feature information; and determine the living body detection result of the target object based on the first feature information and the second feature information.
- The device according to claim 89, wherein the determining sub-module is configured to: perform fusion processing on the first feature information and the second feature information to obtain third feature information; and determine the living body detection result of the target object based on the third feature information.
- The device according to claim 90, wherein the determining sub-module is configured to: obtain, based on the third feature information, a probability that the target object is a living body; and determine the living body detection result of the target object according to the probability that the target object is a living body.
- The device according to any one of claims 63 to 91, wherein the device further comprises: an activation and start module configured to, in response to a failure of the face recognition, activate a password unlocking module provided in the vehicle to start a password unlocking process.
- The device according to any one of claims 63 to 92, wherein the device further comprises a registration module configured to perform one or both of the following: performing vehicle owner registration according to a face image of the vehicle owner collected by the image acquisition module; and performing remote registration according to a face image of the vehicle owner collected by a terminal device of the vehicle owner, and sending registration information to the vehicle, wherein the registration information includes the face image of the vehicle owner.
- A vehicle-mounted face unlocking system, comprising a memory, a face recognition module, an image acquisition module, and a Bluetooth module, wherein the face recognition module is connected to the memory, the image acquisition module, and the Bluetooth module respectively; the Bluetooth module comprises a microprocessor that wakes up the face recognition module when a Bluetooth pairing connection with a Bluetooth device with a preset identifier succeeds or when the Bluetooth device with the preset identifier is found, and a Bluetooth sensor connected to the microprocessor; and the face recognition module is further provided with a communication interface for connecting to a door domain controller, and sends, via the communication interface, control information for unlocking a door to the door domain controller if the face recognition succeeds.
- The vehicle-mounted face unlocking system according to claim 94, wherein the image acquisition module comprises an image sensor and a depth sensor.
- The vehicle-mounted face unlocking system according to claim 95, wherein the depth sensor comprises a binocular infrared sensor, and the two infrared cameras of the binocular infrared sensor are arranged on both sides of the camera of the image sensor.
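The binocular infrared layout above (two IR cameras flanking the image sensor's camera) recovers depth by triangulation: with focal length f in pixels and baseline B between the two IR cameras, a pixel's depth is Z = f·B / d, where d is its disparity between the two views. A sketch with illustrative numbers only; the patent does not specify focal length or baseline:

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Pinhole stereo depth: Z = f * B / d.
    f_px: focal length in pixels; baseline_m: separation of the two IR
    cameras in metres; disparity_px: horizontal pixel offset between views."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a valid depth")
    return f_px * baseline_m / disparity_px

# Illustrative values: 800 px focal length, 6 cm baseline, 60 px disparity.
z = depth_from_disparity(800, 0.06, 60)
print(round(z, 3))  # 0.8 (metres)
```

Pixels where the disparity search fails (occlusion, low texture, IR glare) are exactly the "depth failure pixels" that the update sub-module later repairs.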
- The vehicle-mounted face unlocking system according to claim 96, wherein the image acquisition module further comprises at least one fill light arranged between an infrared camera of the binocular infrared sensor and the camera of the image sensor, the at least one fill light including at least one of a fill light for the image sensor and a fill light for the depth sensor.
- The vehicle-mounted face unlocking system according to claim 95, wherein the image acquisition module further comprises a laser arranged between the camera of the depth sensor and the camera of the image sensor.
- The vehicle-mounted face unlocking system according to any one of claims 94 to 98, wherein the system further comprises a password unlocking module for unlocking a door, the password unlocking module being connected to the face recognition module.
- The vehicle-mounted face unlocking system according to claim 99, wherein the password unlocking module comprises one or both of a touch screen and a keyboard.
- The vehicle-mounted face unlocking system according to any one of claims 94 to 100, wherein the system further comprises a battery module connected to the microprocessor and the face recognition module respectively.
- A vehicle, comprising the vehicle-mounted face unlocking system according to any one of claims 94 to 101, the vehicle-mounted face unlocking system being connected to a door domain controller of the vehicle.
- The vehicle according to claim 102, wherein the image acquisition module is arranged on the exterior of the vehicle.
- The vehicle according to claim 103, wherein the image acquisition module is arranged in at least one of the following positions: a B-pillar, at least one door, or at least one rearview mirror of the vehicle.
- The vehicle according to any one of claims 102 to 104, wherein the face recognition module is arranged inside the vehicle and is connected to the door domain controller via a CAN bus.
- An electronic device, comprising: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to execute the method according to any one of claims 1 to 62.
- A computer-readable storage medium having computer program instructions stored thereon, wherein the computer program instructions, when executed by a processor, implement the method according to any one of claims 1 to 62.
- A computer program comprising computer-readable code, wherein, when the computer-readable code runs in an electronic device, a processor in the electronic device executes the method according to any one of claims 1 to 62.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2021572948A JP2022537923A (en) | 2019-07-01 | 2020-02-26 | VEHICLE DOOR UNLOCK METHOD AND APPARATUS, SYSTEM, VEHICLE, ELECTRONIC DEVICE, AND STORAGE MEDIUM |
KR1020217043021A KR20220016184A (en) | 2019-07-01 | 2020-02-26 | Vehicle door unlocking method and device, system, vehicle, electronic device and storage medium |
KR1020227017334A KR20220070581A (en) | 2019-07-01 | 2020-02-26 | Vehicle door unlocking method and device, system, vehicle, electronic equipment and storage medium |
JP2022059357A JP2022118730A (en) | 2019-07-01 | 2022-03-31 | Vehicle door lock release method and device, system, vehicle, electronic apparatus and storage medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910586845.6 | 2019-07-01 | ||
CN201910586845.6A CN110335389B (en) | 2019-07-01 | 2019-07-01 | Vehicle door unlocking method, vehicle door unlocking device, vehicle door unlocking system, electronic equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021000587A1 (en) | 2021-01-07 |
Family
ID=68143972
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/076713 WO2021000587A1 (en) | 2019-07-01 | 2020-02-26 | Vehicle door unlocking method and device, system, vehicle, electronic equipment and storage medium |
Country Status (4)
Country | Link |
---|---|
JP (2) | JP2022537923A (en) |
KR (2) | KR20220016184A (en) |
CN (1) | CN110335389B (en) |
WO (1) | WO2021000587A1 (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113160458A (en) * | 2021-03-31 | 2021-07-23 | 中国地质大学(武汉) | Internet of things intelligent lock based on TOF technology and RFID technology and use method thereof |
CN113408344A (en) * | 2021-05-13 | 2021-09-17 | 深圳市捷顺科技实业股份有限公司 | Three-dimensional face recognition generation method and related device |
CN114463879A (en) * | 2022-01-25 | 2022-05-10 | 杭州涂鸦信息技术有限公司 | Unlocking method, intelligent terminal and computer readable storage medium |
CN114872660A (en) * | 2022-04-18 | 2022-08-09 | 浙江极氪智能科技有限公司 | Face recognition system control method and device, vehicle and storage medium |
CN115240296A (en) * | 2022-05-10 | 2022-10-25 | 深圳绿米联创科技有限公司 | Equipment awakening method, device, equipment and storage medium |
CN115331334A (en) * | 2022-07-13 | 2022-11-11 | 神通科技集团股份有限公司 | Intelligent stand column based on face recognition and Bluetooth unlocking and unlocking method |
CN116805430A (en) * | 2022-12-12 | 2023-09-26 | 安徽国防科技职业学院 | Digital image safety processing system based on big data |
Families Citing this family (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110335389B (en) * | 2019-07-01 | 2021-10-12 | 上海商汤临港智能科技有限公司 | Vehicle door unlocking method, vehicle door unlocking device, vehicle door unlocking system, electronic equipment and storage medium |
CN114937294A (en) * | 2019-10-22 | 2022-08-23 | 上海商汤智能科技有限公司 | Vehicle door control method, vehicle door control device, vehicle door control system, vehicle, electronic equipment and storage medium |
CN113421358B (en) * | 2020-03-03 | 2023-05-09 | 比亚迪股份有限公司 | Lock control system, lock control method and vehicle |
CN111540090A (en) * | 2020-04-29 | 2020-08-14 | 北京市商汤科技开发有限公司 | Method and device for controlling unlocking of vehicle door, vehicle, electronic equipment and storage medium |
CN111932732A (en) * | 2020-08-13 | 2020-11-13 | 四川巧盒物联科技有限公司 | Bluetooth communication-based intelligent lock unlocking method for circulating packaging box |
CN111932727A (en) * | 2020-08-13 | 2020-11-13 | 四川巧盒物联科技有限公司 | Packaging box intelligent lock biological identification unlocking method based on Bluetooth communication |
CN111932725A (en) * | 2020-08-13 | 2020-11-13 | 四川巧盒物联科技有限公司 | Intelligent packing box lock unlocking method based on Bluetooth communication unlocking equipment |
CN111932730A (en) * | 2020-08-13 | 2020-11-13 | 四川巧盒物联科技有限公司 | Bluetooth communication-based remote unlocking method for intelligent lock of circulating packing box |
CN111932724A (en) * | 2020-08-13 | 2020-11-13 | 四川巧盒物联科技有限公司 | Induction type circulating packaging box intelligent lock unlocking method based on Bluetooth communication |
CN111932723A (en) * | 2020-08-13 | 2020-11-13 | 四川巧盒物联科技有限公司 | Bluetooth communication-based quick unlocking method for intelligent lock of circulating packing box |
CN112590706A (en) * | 2020-12-18 | 2021-04-02 | 上海傲硕信息科技有限公司 | Noninductive face recognition vehicle door unlocking system |
CN112684722A (en) * | 2020-12-18 | 2021-04-20 | 上海傲硕信息科技有限公司 | Low-power consumption power supply control circuit |
CN112572349A (en) * | 2020-12-22 | 2021-03-30 | 广州橙行智动汽车科技有限公司 | Vehicle digital key processing method and system |
CN113135161A (en) * | 2021-03-18 | 2021-07-20 | 江西欧迈斯微电子有限公司 | Identification system and identification method |
CN114268380B (en) * | 2021-10-27 | 2024-03-08 | 浙江零跑科技股份有限公司 | Automobile Bluetooth non-inductive entry improvement method based on acoustic wave communication |
JP2023098771A (en) * | 2021-12-29 | 2023-07-11 | Ihi運搬機械株式会社 | Operation method and operation device of mechanical parking device |
JP2024011894A (en) * | 2022-07-15 | 2024-01-25 | ソニーセミコンダクタソリューションズ株式会社 | Information processing device and information processing system |
CN115546939A (en) * | 2022-09-19 | 2022-12-30 | 国网青海省电力公司信息通信公司 | Unlocking mode determination method and device and electronic equipment |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20040029211A (en) * | 2002-09-25 | 2004-04-06 | 현대자동차주식회사 | An antitheft device and the control method of automobile |
CN107963057A (en) * | 2017-12-23 | 2018-04-27 | 埃泰克汽车电子(芜湖)有限公司 | A kind of keyless access system based on mobile phone |
CN108846924A (en) * | 2018-05-31 | 2018-11-20 | 上海商汤智能科技有限公司 | Vehicle and car door solution lock control method, device and car door system for unlocking |
CN208207948U (en) * | 2018-05-31 | 2018-12-07 | 上海商汤智能科技有限公司 | vehicle with face unlocking function |
CN109243024A (en) * | 2018-08-29 | 2019-01-18 | 上海交通大学 | A kind of automobile unlocking system and method based on recognition of face |
CN109823306A (en) * | 2019-02-22 | 2019-05-31 | 广东远峰汽车电子有限公司 | Car door unlocking method, device, system and readable storage medium storing program for executing |
CN208954163U (en) * | 2018-08-16 | 2019-06-07 | 深圳嗒程科技有限公司 | A kind of unlocking system based on shared automobile |
CN109895736A (en) * | 2019-02-19 | 2019-06-18 | 汉腾汽车有限公司 | Safe opening door device and safe opening of car door method based on face recognition technology |
CN110335389A (en) * | 2019-07-01 | 2019-10-15 | 上海商汤临港智能科技有限公司 | Car door unlocking method and device, system, vehicle, electronic equipment and storage medium |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101715139B (en) * | 2009-11-16 | 2011-04-20 | 南京邮电大学 | Multi-mode error code covering method based on complementary covering mode in dimensional images |
CN102595024B (en) * | 2011-12-16 | 2014-10-22 | 飞狐信息技术(天津)有限公司 | Method and device for restoring digital video images |
CN103248906B (en) * | 2013-04-17 | 2015-02-18 | 清华大学深圳研究生院 | Method and system for acquiring depth map of binocular stereo video sequence |
CN103729919A (en) * | 2013-12-10 | 2014-04-16 | 杨伟 | Electronic access control system |
CN105096311A (en) * | 2014-07-01 | 2015-11-25 | 中国科学院科学传播研究中心 | Technology for restoring depth image and combining virtual and real scenes based on GPU (Graphic Processing Unit) |
CN105894503B (en) * | 2016-03-30 | 2019-10-01 | 江苏大学 | A kind of restorative procedure of pair of Kinect plant colour and depth detection image |
CN107176060A (en) * | 2017-07-27 | 2017-09-19 | 杭州力谱科技有限公司 | A kind of electric automobile charging pile |
CN107393046B (en) * | 2017-08-03 | 2019-08-06 | 陕西尚品信息科技有限公司 | A kind of method that bluetooth registers system and bluetooth is registered |
CN109121077B (en) * | 2018-08-06 | 2020-09-29 | 刘丽 | Bluetooth work system of garage and work method thereof |
- 2019-07-01: CN CN201910586845.6A patent/CN110335389B/en active Active
- 2020-02-26: KR KR1020217043021A patent/KR20220016184A/en not_active Application Discontinuation
- 2020-02-26: KR KR1020227017334A patent/KR20220070581A/en not_active Application Discontinuation
- 2020-02-26: JP JP2021572948A patent/JP2022537923A/en not_active Abandoned
- 2020-02-26: WO PCT/CN2020/076713 patent/WO2021000587A1/en active Application Filing
- 2022-03-31: JP JP2022059357A patent/JP2022118730A/en not_active Abandoned
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113160458A (en) * | 2021-03-31 | 2021-07-23 | 中国地质大学(武汉) | Internet of things intelligent lock based on TOF technology and RFID technology and use method thereof |
CN113160458B (en) * | 2021-03-31 | 2024-05-31 | 中国地质大学(武汉) | Internet of things intelligent lock based on TOF technology and RFID technology and use method thereof |
CN113408344A (en) * | 2021-05-13 | 2021-09-17 | 深圳市捷顺科技实业股份有限公司 | Three-dimensional face recognition generation method and related device |
CN114463879A (en) * | 2022-01-25 | 2022-05-10 | 杭州涂鸦信息技术有限公司 | Unlocking method, intelligent terminal and computer readable storage medium |
CN114872660A (en) * | 2022-04-18 | 2022-08-09 | 浙江极氪智能科技有限公司 | Face recognition system control method and device, vehicle and storage medium |
CN115240296A (en) * | 2022-05-10 | 2022-10-25 | 深圳绿米联创科技有限公司 | Equipment awakening method, device, equipment and storage medium |
CN115331334A (en) * | 2022-07-13 | 2022-11-11 | 神通科技集团股份有限公司 | Intelligent pillar based on face recognition and Bluetooth unlocking, and unlocking method thereof |
CN116805430A (en) * | 2022-12-12 | 2023-09-26 | 安徽国防科技职业学院 | Digital image safety processing system based on big data |
CN116805430B (en) * | 2022-12-12 | 2024-01-02 | 安徽国防科技职业学院 | Digital image safety processing system based on big data |
Also Published As
Publication number | Publication date |
---|---|
CN110335389A (en) | 2019-10-15 |
JP2022537923A (en) | 2022-08-31 |
KR20220070581A (en) | 2022-05-31 |
JP2022118730A (en) | 2022-08-15 |
KR20220016184A (en) | 2022-02-08 |
CN110335389B (en) | 2021-10-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021000587A1 (en) | Vehicle door unlocking method and device, system, vehicle, electronic equipment and storage medium | |
TWI785312B (en) | Vehicle door unlocking method and device thereof, vehicle-mounted face unlocking system, vehicle, electronic device and storage medium | |
WO2021077738A1 (en) | Vehicle door control method, apparatus, and system, vehicle, electronic device, and storage medium | |
US20210001810A1 (en) | System, method, and computer program for enabling operation based on user authorization | |
US11196966B2 (en) | Identifying and locating objects by associating video data of the objects with signals identifying wireless devices belonging to the objects | |
US11321575B2 (en) | Method, apparatus and system for liveness detection, electronic device, and storage medium | |
WO2019214201A1 (en) | Live body detection method and apparatus, system, electronic device, and storage medium | |
CA3105190A1 (en) | System and method for identifying and verifying one or more individuals using facial recognition | |
US20160364009A1 (en) | Gesture recognition for wireless audio/video recording and communication devices | |
WO2015179223A1 (en) | Adaptive low-light identification | |
DE102019107582A1 (en) | Electronic device with image pickup source identification and corresponding methods | |
US11659144B1 (en) | Security video data processing systems and methods | |
CN111626086A (en) | Living body detection method, living body detection device, living body detection system, electronic device, and storage medium | |
US20200356647A1 (en) | Electronic device and control method therefor | |
Kumar et al. | Design and Analysis of IoT Based Real Time System for Door Locking/Unlocking Using Face Identification |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20834966 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2021572948 Country of ref document: JP Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 20217043021 Country of ref document: KR Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20834966 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 12/09/2022)
|