WO2020211231A1 - Method, apparatus, device, and medium for controlling the operating frequency of a TOF sensor - Google Patents

Method, apparatus, device, and medium for controlling the operating frequency of a TOF sensor (Tof传感器的工作频率的控制方法、装置、设备及介质)

Info

Publication number
WO2020211231A1
WO2020211231A1 PCT/CN2019/101624 CN2019101624W WO2020211231A1 WO 2020211231 A1 WO2020211231 A1 WO 2020211231A1 CN 2019101624 W CN2019101624 W CN 2019101624W WO 2020211231 A1 WO2020211231 A1 WO 2020211231A1
Authority
WO
WIPO (PCT)
Prior art keywords
parameter
value
face area
image frame
target image
Prior art date
Application number
PCT/CN2019/101624
Other languages
English (en)
French (fr)
Inventor
廖声洋 (Liao Shengyang)
Original Assignee
北京迈格威科技有限公司 (Beijing Megvii Technology Co., Ltd.)
Priority date
Filing date
Publication date
Application filed by 北京迈格威科技有限公司 (Beijing Megvii Technology Co., Ltd.)
Priority to US 17/424,427 (published as US20220107398A1)
Publication of WO2020211231A1

Classifications

    • G06Q20/3223 Realising banking transactions through M-devices
    • G06Q20/40145 Biometric identity checks
    • G01S7/4802 Using analysis of echo signal for target characterisation; target signature; target cross-section
    • G01S7/4861 Circuits for detection, sampling, integration or read-out
    • G01S7/491 Details of non-pulse systems
    • G01S7/493 Extracting wanted echo signals
    • G01S17/89 Lidar systems specially adapted for mapping or imaging
    • G01S17/894 3D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
    • G06T7/521 Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
    • G06T7/55 Depth or shape recovery from multiple images
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06V20/64 Three-dimensional objects
    • G06V40/161 Human faces: detection; localisation; normalisation
    • G06V40/166 Human faces: detection; localisation; normalisation using acquisition arrangements
    • G06V40/168 Human faces: feature extraction; face representation
    • G06T2207/10028 Range image; depth image; 3D point clouds
    • G06T2207/30201 Face

Definitions

  • the present disclosure relates to the field of computer technology, and in particular to a method, apparatus, device, and computer-readable storage medium for controlling the operating frequency of a TOF sensor.
  • applications that are gradually becoming popular on mobile phones include face unlocking, face reshaping, 3D beautification, and 3D lighting.
  • in the existing technology, the operating frequency (collection frequency) of the TOF sensor is fixed and cannot be adjusted according to the application scenario (for example, a payment scenario), which results in a poor user experience.
  • the present disclosure proposes a method, apparatus, device, and computer-readable storage medium for controlling the operating frequency of the TOF sensor, so as to solve the problem of how to dynamically adjust and control the operating frequency of the TOF sensor.
  • some embodiments of the present disclosure provide a method for controlling the operating frequency of a time-of-flight (TOF) sensor, including: inputting a target image frame into a preset face detection model for face detection, and determining a face area in the target image frame; determining feature information of the face area according to the face area and depth information of the target image frame acquired by the TOF sensor; and adjusting and controlling the operating frequency of the TOF sensor according to the feature information and a preset operating frequency of the TOF sensor.
  • determining the feature information of the face area according to the face area and the depth information of the target image frame acquired by the TOF sensor includes: determining, according to the face area and the depth information of the target image frame acquired by the TOF sensor, the local depth information corresponding to each part of the face area; and determining the average depth value of the face area according to the face area and the local depth information, wherein the feature information includes the average depth value.
  • determining the feature information of the face area according to the face area and the depth information of the target image frame acquired by the TOF sensor further includes: determining, according to the target image frame and the face area, the deviation ratio between the center point of the face area and the center point of the target image frame, wherein the feature information further includes the deviation ratio.
  • before inputting the target image frame into the preset face detection model for face detection, the method further includes: acquiring a plurality of image frames to be processed, and acquiring the depth information of the plurality of image frames to be processed through the TOF sensor;
  • inputting any one of the plurality of image frames to be processed into the preset face detection model for face detection; and, if a face is detected, using that image frame to be processed as the target image frame and determining the face area in the target image frame.
  • determining the local depth information corresponding to each part of the face area includes: determining the local depth information corresponding to each part of the face area according to the feature parameters of the face area and the depth information of the target image frame.
  • determining the deviation ratio between the center point of the face area and the center point of the target image frame according to the target image frame and the face area includes: determining the deviation ratio according to the feature parameters of the face area and the feature parameters of the target image frame.
  • the feature parameters of the face area include a first start value of the face area corresponding to a first direction, a second start value of the face area corresponding to a second direction, a first width parameter of the face area corresponding to the first direction, and a first height parameter of the face area corresponding to the second direction; or, the feature parameters of the face area include a first start value and a first end value of the face area corresponding to the first direction, and a second start value and a second end value of the face area corresponding to the second direction. The feature parameters of the target image frame include a third start value of the target image frame corresponding to the first direction, a fourth start value of the target image frame corresponding to the second direction, a second width parameter of the target image frame corresponding to the first direction, and a second height parameter of the target image frame corresponding to the second direction; or, the feature parameters of the target image frame include a third start value and a third end value of the target image frame corresponding to the first direction, and a fourth start value and a fourth end value of the target image frame corresponding to the second direction.
  • determining the average depth value of the face area according to the face area and the local depth information includes: summing the local depth information to determine a first parameter; multiplying the first width parameter by the first height parameter, or multiplying the absolute value of the difference between the first end value and the first start value by the absolute value of the difference between the second end value and the second start value, to determine a second parameter; and dividing the first parameter by the second parameter to determine the average depth value.
  • adjusting and controlling the operating frequency of the TOF sensor according to the feature information and the preset operating frequency of the TOF sensor includes: adjusting and controlling the operating frequency of the TOF sensor according to the average depth value and the preset operating frequency of the TOF sensor.
  • adjusting and controlling the operating frequency of the TOF sensor according to the average depth value and the preset operating frequency of the TOF sensor includes: multiplying the preset operating frequency of the TOF sensor by the average depth value to determine a third parameter; summing the first width parameter of the face area and the first height parameter of the face area to determine a fourth parameter, or summing the absolute value of the difference between the first end value and the first start value and the absolute value of the difference between the second end value and the second start value to determine the fourth parameter; and dividing the third parameter by the fourth parameter to determine the updated operating frequency of the TOF sensor, where the updated operating frequency of the TOF sensor is less than the upper threshold of the operating frequency.
  • determining the deviation ratio according to the feature parameters of the face area and the feature parameters of the target image frame includes: determining a fifth parameter, where the fifth parameter represents the distance between the center point of the face area and the center point of the target image frame; determining a sixth parameter, where the sixth parameter represents half of the diagonal length of the target image frame; and dividing the fifth parameter by the sixth parameter to determine the deviation ratio.
  • adjusting and controlling the operating frequency of the TOF sensor according to the feature information and the preset operating frequency of the TOF sensor includes: adjusting and controlling the operating frequency of the TOF sensor according to the average depth value, the deviation ratio, and the preset operating frequency of the TOF sensor.
  • the adjusting and controlling of the operating frequency of the TOF sensor according to the average depth value, the deviation ratio, and the preset operating frequency of the TOF sensor includes: multiplying the preset operating frequency of the TOF sensor by the average depth value to determine a third parameter; summing the first width parameter of the face area and the first height parameter of the face area to determine a fourth parameter, or summing the absolute value of the difference between the first end value and the first start value and the absolute value of the difference between the second end value and the second start value to determine the fourth parameter; dividing the third parameter by the fourth parameter to determine an eighth parameter; multiplying the preset operating frequency by the deviation ratio to determine a seventh parameter; and summing the seventh parameter and the eighth parameter to determine the updated operating frequency of the TOF sensor, where the updated operating frequency is less than the upper limit threshold of the operating frequency of the TOF sensor.
  • before determining the feature information of the face area according to the face area and the acquired depth information of the target image frame, the method further includes:
  • acquiring the identity of the preset application, and judging whether the preset application is a payment-type application according to the identity of the preset application, wherein the preset application is configured to control the image acquisition device to obtain the target image frame;
  • if the preset application is a payment-type application, performing the step of determining the feature information of the face area according to the face area and the acquired depth information of the target image frame;
  • otherwise, the operating frequency of the TOF sensor is not adjusted or controlled.
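  • As an illustration only (not part of the disclosure), the following Python sketch shows the payment-type gating described above: the feature-based frequency update is applied only when the application that triggered image capture is a payment-type application. The application identifiers, numeric values, and function names are assumptions.

```python
# Minimal sketch of the payment-type gating (illustrative names and values only).

PAYMENT_APP_IDS = {"com.tencent.mm", "com.eg.android.AlipayGphone"}  # example IDs

def is_payment_app(app_id: str) -> bool:
    """Judge whether the preset application is a payment-type application by its identity."""
    return app_id in PAYMENT_APP_IDS

def select_operating_frequency(app_id: str, preset_f0: float, updated_f1: float) -> float:
    """Return the updated frequency only for payment-type apps; otherwise keep the preset f0."""
    return updated_f1 if is_payment_app(app_id) else preset_f0

print(select_operating_frequency("com.tencent.mm", 30.0, 45.0))      # 45.0 (payment app)
print(select_operating_frequency("com.example.camera", 30.0, 45.0))  # 30.0 (non-payment app)
```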
  • some embodiments of the present disclosure also provide a device for controlling the working frequency of a TOF sensor, including:
  • the first processing module is configured to input the target image frame into a preset face detection model for face detection, and determine the face area in the target image frame;
  • the second processing module is used to determine the feature information of the face area according to the face area and the depth information of the target image frame obtained by the TOF sensor;
  • the third processing module is used to adjust and control the operating frequency of the TOF sensor according to the characteristic information and the preset operating frequency of the TOF sensor.
  • the feature information includes an average depth value of the face region
  • the second processing module is configured to: determine the local depth information corresponding to each part of the face area according to the face area and the depth information of the target image frame acquired by the TOF sensor; and determine the average depth value according to the face area and the local depth information;
  • the third processing module is configured to adjust and control the operating frequency of the TOF sensor according to the average depth value and the preset operating frequency of the TOF sensor.
  • the feature information includes the average depth value of the face area and the deviation ratio between the center point of the face area and the center point of the target image frame;
  • the second processing module is configured to: determine the local depth information corresponding to each part of the face area according to the face area and the depth information of the target image frame acquired by the TOF sensor; determine the average depth value according to the face area and the local depth information; and determine the deviation ratio according to the target image frame and the face area;
  • the third processing module is configured to adjust and control the operating frequency of the TOF sensor according to the deviation ratio, the average depth value and the preset operating frequency of the TOF sensor.
  • some embodiments of the present disclosure also provide an electronic device, including: a processor, a memory, and a bus;
  • the bus is used to connect the processor and the memory;
  • the memory is used to store a computer program;
  • the processor is configured to execute the method for controlling the operating frequency of the TOF sensor provided by any one of the foregoing embodiments of the present disclosure by calling and running a computer program.
  • some embodiments of the present disclosure also provide a computer-readable storage medium, the computer-readable storage medium stores a computer program, and the computer program is used to execute the method for controlling the operating frequency of the TOF sensor provided by any of the above-mentioned embodiments of the present disclosure.
  • the method for controlling the operating frequency of the TOF sensor includes: inputting a target image frame into a preset face detection model for face detection, and determining a face area in the target image frame; determining the feature information of the face area according to the face area and the depth information of the target image frame acquired by the TOF sensor; and adjusting and controlling the operating frequency of the TOF sensor according to the feature information and the preset operating frequency of the TOF sensor. In this way, the operating frequency of the TOF sensor is dynamically adjusted and controlled according to the distance between the TOF sensor and the face area, or according to that distance together with the degree to which the center point of the face area deviates from the center point of the target image frame (i.e., the deviation ratio). The farther the distance between the TOF sensor and the face area, or the farther that distance and the larger the deviation ratio, the higher the operating frequency of the TOF sensor is raised in real time, thereby improving the security of payment; the closer the distance between the TOF sensor and the face area, or the closer that distance and the smaller the deviation ratio, the lower the operating frequency of the TOF sensor is set in real time, thereby saving power, reducing power consumption, and significantly improving the user experience.
  • FIG. 1A is a schematic flowchart of a method for controlling the working frequency of a TOF sensor according to an embodiment of the present disclosure
  • FIG. 1B is a schematic flowchart of another method for controlling the working frequency of a TOF sensor according to an embodiment of the present disclosure
  • FIG. 1C is a schematic flowchart of another method for controlling the working frequency of a TOF sensor according to an embodiment of the present disclosure
  • FIG. 2 is a schematic flowchart of another method for controlling the working frequency of a TOF sensor according to an embodiment of the disclosure
  • FIG. 3 is a schematic diagram of a depth image acquired by a TOF sensor provided by an embodiment of the disclosure
  • FIG. 4 is a schematic diagram of a TOF sensor frequency curve corresponding to a non-payment application provided by an embodiment of the disclosure
  • FIG. 5 is a schematic diagram of a TOF sensor frequency curve corresponding to a payment application provided by an embodiment of the disclosure
  • FIG. 6 is a schematic structural diagram of a TOF sensor operating frequency control device according to an embodiment of the disclosure.
  • FIG. 7 is a schematic structural diagram of an electronic device provided by an embodiment of the disclosure.
  • Some embodiments of the present disclosure provide a method for controlling the working frequency of a TOF sensor.
  • a schematic flowchart of the method is shown in FIG. 1A, and the method includes:
  • S10 Input the target image frame into a preset face detection model to perform face detection, and determine a face area in the target image frame.
  • S20 Determine feature information of the face area according to the face area and the depth information of the target image frame acquired by the TOF sensor.
  • S30 Adjust and control the working frequency of the TOF sensor according to the characteristic information and the preset working frequency of the TOF sensor.
  • the feature information includes the average depth value of the face area.
  • the method includes:
  • S101 Input a target image frame into a preset face detection model to perform face detection, and determine a face area in the target image frame.
  • S102 Determine the local depth information corresponding to each part of the face area according to the face area and the depth information of the target image frame acquired by the TOF sensor.
  • S103 Determine an average depth value of the face area according to the face area and the local depth information.
  • S104 Adjust and control the working frequency of the TOF sensor according to the average depth value and the preset working frequency of the TOF sensor.
  • step S10 in FIG. 1A includes step S101 in FIG. 1B
  • step S20 in FIG. 1A includes steps S102 and S103 in FIG. 1B
  • step S30 in FIG. 1A includes step S104 in FIG. 1B.
  • the feature information includes the average depth value of the face region and the deviation ratio between the center point of the face region and the center point of the target image frame.
  • the method includes:
  • S101 Input a target image frame into a preset face detection model to perform face detection, and determine a face area in the target image frame.
  • S102 Determine the local depth information corresponding to each part of the face area according to the face area and the depth information of the target image frame acquired by the TOF sensor.
  • S103 Determine an average depth value of the face area according to the face area and the local depth information.
  • S105 Determine the deviation ratio between the center point of the face area and the center point of the target image frame according to the target image frame and the face area.
  • S106 Adjust and control the working frequency of the TOF sensor according to the deviation ratio, the average depth value and the preset working frequency of the TOF sensor.
  • step S10 in FIG. 1A includes step S101 in FIG. 1C
  • step S20 in FIG. 1A includes steps S102, S103, and S105 in FIG. 1C
  • step S30 in FIG. 1A includes step S106 in FIG. 1C.
  • in these embodiments, the target image frame is input into a preset face detection model for face detection, and the face area in the target image frame is determined; the local depth information corresponding to each part of the face area is determined according to the face area and the depth information of the target image frame obtained by the TOF sensor; the average depth value of the face area is determined according to the face area and the local depth information; the deviation ratio between the center point of the face area and the center point of the target image frame is determined according to the target image frame and the face area; and the operating frequency of the TOF sensor is adjusted and controlled according to the average depth value and the preset operating frequency of the TOF sensor, or according to the deviation ratio, the average depth value, and the preset operating frequency of the TOF sensor. In this way, the operating frequency of the TOF sensor is dynamically adjusted and controlled according to the distance between the TOF sensor and the face area, or according to that distance together with the degree to which the center point of the face area deviates from the center point of the target image frame (i.e., the deviation ratio).
  • the farther the distance between the TOF sensor and the face area, or the farther that distance and the larger the deviation ratio, the higher the operating frequency of the TOF sensor is raised in real time, thereby improving the security of payment; the closer the distance between the TOF sensor and the face area, or the closer that distance and the smaller the deviation ratio, the lower the operating frequency of the TOF sensor is set in real time, thereby saving power and reducing power consumption, and significantly improving the user experience.
  • each part of the face area may be each pixel point in the face area, and the local depth information may include depth information corresponding to each pixel point.
  • each part of the face area may also include the mouth, eyes, nose, eyebrows, etc. in the face, so that the local depth information may include the local depth information corresponding to the mouth and the local depth information corresponding to the eyes.
  • the local depth information corresponding to the mouth can represent the average value of the depth information corresponding to the mouth area, or can represent the depth information corresponding to each pixel in the mouth area.
  • each part of the face area can be divided according to actual conditions, which is not specifically limited in the present disclosure.
  • the embodiments of the present disclosure are described by taking each part of the face area as each pixel point in the face area, that is, the local depth information is the depth information corresponding to each pixel point.
  • before inputting the target image frame into the preset face detection model for face detection, that is, before performing step S101, the method further includes:
  • a plurality of image frames to be processed are acquired, and the depth information of each image frame to be processed (that is, the acquired multiple image frames to be processed) is acquired through the TOF sensor.
  • inputting the target image frame into a preset face detection model for face detection, and determining the face area in the target image frame includes:
  • inputting any one of the plurality of image frames to be processed into the preset face detection model for face detection; and, if a face is detected, using that image frame to be processed as the target image frame and determining the face area in the target image frame.
  • alternatively, inputting the target image frame into the preset face detection model for face detection and determining the face area in the target image frame may include: performing face detection on any one of the plurality of image frames to be processed (for example, the face detection can be performed by the preset face detection model); if a face is detected, using that image frame to be processed as the target image frame; and inputting the target image frame into the preset face detection model to perform face detection again, so as to determine the face area in the target image frame.
  • the target image frame includes at least one human face.
  • the method for controlling the operating frequency of the TOF sensor may be applied to an electronic system, which may include multiple applications (apps), image acquisition devices, TOF sensors, etc., and multiple applications may include WeChat applications, Alipay applications, and so on.
  • the image acquisition device is used to acquire multiple image frames to be processed.
  • the image acquisition device can be controlled and turned on by a preset application in multiple applications, thereby acquiring multiple image frames to be processed.
  • the image capture device may include a camera or the like.
  • if no face is detected, the next image frame to be processed is selected from the plurality of image frames to be processed, and the above face detection process is repeated for that image frame. It should be noted that if no human face is detected in any of the image frames to be processed, this indicates that there is no target image frame among them; at this time, it can be determined whether to end the preset application, or the preset application can control the image acquisition device to acquire image frames to be processed again.
  • the multiple image frames to be processed may be multiple image frames of different scenes, or multiple image frames of the same scene.
  • the multiple image frames to be processed may be image frames obtained by shooting the same scene at different distances.
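  • As an illustration only, a minimal Python sketch of the frame-selection loop described above, under the assumption that a face detector returning a bounding box (x0, y0, width0, height0) or None is available; all names and values are illustrative.

```python
# Sketch: pick the first to-be-processed frame in which a face is detected.
from typing import Any, Callable, List, Optional, Tuple

Rect = Tuple[int, int, int, int]  # face area as (x0, y0, width0, height0)

def pick_target_frame(frames: List[Any],
                      detect_face: Callable[[Any], Optional[Rect]]
                      ) -> Optional[Tuple[Any, Rect]]:
    """Return the first frame containing a face together with its face area,
    or None if no frame contains a face (caller may end the app or re-acquire)."""
    for frame in frames:
        rect = detect_face(frame)
        if rect is not None:
            return frame, rect  # this frame becomes the target image frame
    return None

# Hypothetical usage with a fake detector:
frames = ["frame_without_face", "frame_with_face"]
fake_detector = lambda f: (10, 20, 100, 120) if f == "frame_with_face" else None
print(pick_target_frame(frames, fake_detector))  # ('frame_with_face', (10, 20, 100, 120))
```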
  • in step S102, determining the local depth information corresponding to each part of the face area according to the face area and the depth information of the target image frame acquired by the TOF sensor includes: determining the local depth information corresponding to each part of the face area according to the feature parameters of the face area and the depth information of the target image frame.
  • determining the deviation ratio between the center point of the face area and the center point of the target image frame includes: determining the deviation ratio according to the feature parameters of the face area and the feature parameters of the target image frame.
  • in some embodiments, the feature parameters of the face area include the first start value of the face area corresponding to the first direction, the second start value of the face area corresponding to the second direction, the first width parameter of the face area corresponding to the first direction, and the first height parameter of the face area corresponding to the second direction; that is, step S102 determines the local depth information corresponding to each part of the face area according to the first start value, the second start value, the first width parameter, the first height parameter, and the depth information of the target image frame.
  • correspondingly, the feature parameters of the target image frame include the third start value of the target image frame corresponding to the first direction, the fourth start value of the target image frame corresponding to the second direction, the second width parameter of the target image frame corresponding to the first direction, and the second height parameter of the target image frame corresponding to the second direction; step S105 determines the deviation ratio according to these feature parameters of the target image frame and the feature parameters of the face area.
  • alternatively, the feature parameters of the face area include the first start value and the first end value of the face area corresponding to the first direction, and the second start value and the second end value of the face area corresponding to the second direction; that is, step S102 determines the local depth information corresponding to each part of the face area according to the first start value, the first end value, the second start value, the second end value, and the depth information of the target image frame.
  • the characteristic parameters of the target image frame include the third start value and the third end value of the target image frame corresponding to the first direction, and the fourth start value and the fourth end value of the target image frame corresponding to the second direction.
  • in this case, step S105 shown in FIG. 1C may include determining the deviation ratio based on the first start value, the first end value, the second start value, the second end value, the third start value, the third end value, the fourth start value, and the fourth end value.
  • alternatively, step S105 may determine the deviation ratio according to the first start value, the second start value, the first width parameter, the first height parameter, the third start value, the third end value, the fourth start value, and the fourth end value; or, step S105 may determine the deviation ratio according to the first start value, the first end value, the second start value, the second end value, the third start value, the fourth start value, the second width parameter, and the second height parameter.
  • the deviation ratio indicates the degree to which the center point of the face region deviates from the center point of the target image frame.
  • the face area is located in the face area plane coordinate system
  • the first direction of the face area is parallel to the horizontal axis direction of the face area plane coordinate system, that is, the first direction of the face area includes the horizontal axis direction of the face area plane coordinate system
  • the second direction of the face area is parallel to the vertical axis direction of the face area plane coordinate system, that is, the second direction of the face area includes the vertical axis direction of the face area plane coordinate system.
  • the target image frame can be expressed as Targ(x10, y10, width10, height10), that is, the third start value is x10, the fourth start value is y10, the second width parameter is width10, and the second height parameter is height10.
  • the third starting value x10 and the fourth starting value y10 are both 0.
  • the resolution of the target image frame is expressed as width10*height10.
  • the target image frame can also be expressed as Targ (x10, y10, x11, y11), that is, the third starting value is x10, the third ending value is x11, the fourth starting value is y10, and the fourth ending value is y11 .
  • x10, y10, width10, height10, x11, and y11 may all be positive numbers.
  • x11 is greater than x10
  • y11 is greater than y10.
  • the face area may be a rectangular area.
  • for example, the face area is represented as Rect(x0, y0, width0, height0): the first start value of the face area corresponding to the first direction is x0, the second start value of the face area corresponding to the second direction is y0, the first width parameter of the face area corresponding to the first direction is width0, and the first height parameter of the face area corresponding to the second direction is height0. The depth information of the target image frame is Depth(x, y), and the local depth information of the face area is Depth(xi, yi), where the range of xi is (x0, x0 + width0) and the range of yi is (y0, y0 + height0).
  • alternatively, the face area is represented as Rect(x0, y0, x1, y1): the first start value of the face area corresponding to the first direction is x0, the first end value of the face area corresponding to the first direction is x1, the second start value of the face area corresponding to the second direction is y0, and the second end value of the face area corresponding to the second direction is y1; the depth information of the target image frame is likewise Depth(x, y).
  • x0, y0, width0, height0, x1, y1 can all be positive numbers.
  • x1 is greater than x0
  • y1 is greater than y0.
  • the depth information Depth(x,y) of the target image frame is also determined based on the plane coordinate system located in the face area.
  • the face area may also be a circular area or the like.
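  • As an illustration only, a Python sketch of the two face-area parameterizations and of extracting the local depth information Depth(xi, yi) for the face area; it assumes the depth information Depth(x, y) is stored as a NumPy array indexed as depth[y, x], which is not specified in the disclosure.

```python
import numpy as np

def rect_from_endpoints(x0, y0, x1, y1):
    """Convert Rect(x0, y0, x1, y1) to Rect(x0, y0, width0, height0)."""
    return x0, y0, abs(x1 - x0), abs(y1 - y0)

def local_depth(depth: np.ndarray, x0, y0, width0, height0):
    """Local depth information of the face area: xi in (x0, x0 + width0),
    yi in (y0, y0 + height0)."""
    return depth[y0:y0 + height0, x0:x0 + width0]

depth = np.random.rand(480, 640)  # placeholder depth image of the target frame
crop = local_depth(depth, *rect_from_endpoints(200, 120, 360, 320))
print(crop.shape)                 # (200, 160) -> height0 x width0
```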
  • in step S103, determining the average depth value of the face area according to the face area and the local depth information includes: summing the local depth information to determine the first parameter; multiplying the first width parameter by the first height parameter to determine the second parameter; and dividing the first parameter by the second parameter to determine the average depth value.
  • alternatively, determining the average depth value of the face area includes: summing the local depth information to determine the first parameter; determining the second parameter according to the product of the absolute value of the difference between the first end value and the first start value and the absolute value of the difference between the second end value and the second start value; and dividing the first parameter by the second parameter to determine the average depth value.
  • in this case, determining the second parameter may include: determining the first width parameter according to the absolute value of the difference between the first end value and the first start value, determining the first height parameter according to the absolute value of the difference between the second end value and the second start value, and multiplying the first width parameter by the first height parameter to determine the second parameter.
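  • As an illustration only, a Python sketch of the average depth computation described above: the first parameter is the sum of the local depth information, the second parameter is the product of the face-area width and height, and the average depth value avg0 is their quotient. The placeholder data is an assumption.

```python
import numpy as np

def average_depth(local_depth: np.ndarray) -> float:
    """avg0 = (sum of local depth information) / (width0 * height0)."""
    first_param = float(local_depth.sum())                       # first parameter
    second_param = local_depth.shape[0] * local_depth.shape[1]   # height0 * width0
    return first_param / second_param                            # average depth value avg0

face_depth = np.full((200, 160), 0.5)  # placeholder face-area depth crop
print(average_depth(face_depth))       # 0.5
```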
  • in step S104, in the case where the feature parameters of the face area include the first start value, the second start value, the first width parameter, and the first height parameter, adjusting and controlling the operating frequency of the TOF sensor according to the average depth value and the preset operating frequency of the TOF sensor includes: multiplying the preset operating frequency by the average depth value to determine the third parameter; summing the first width parameter and the first height parameter to determine the fourth parameter; and dividing the third parameter by the fourth parameter to determine the updated operating frequency of the TOF sensor.
  • in step S104, in the case where the feature parameters of the face area include the first start value, the first end value, the second start value, and the second end value, adjusting and controlling the operating frequency of the TOF sensor according to the average depth value and the preset operating frequency of the TOF sensor includes: multiplying the preset operating frequency by the average depth value to determine the third parameter; summing the absolute value of the difference between the first end value and the first start value and the absolute value of the difference between the second end value and the second start value to determine the fourth parameter; and dividing the third parameter by the fourth parameter to determine the updated operating frequency of the TOF sensor.
  • the updated operating frequency of the TOF sensor is less than the upper threshold of the operating frequency of the TOF sensor; further, the updated operating frequency of the TOF sensor is greater than the lower threshold of the operating frequency of the TOF sensor.
  • the preset operating frequency of the TOF sensor is f0
  • the first width parameter of the face area is width0
  • the first height parameter of the face area is height0
  • the average depth value is avg0.
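  • As an illustration only, a Python sketch of the average-depth-based update described above, i.e., f1 = f0 × avg0 / (width0 + height0). Clamping the result between the lower and upper thresholds follows the threshold constraints mentioned in the text; the threshold values and example numbers are assumptions.

```python
def updated_frequency(f0, avg0, width0, height0, f_min=10.0, f_max=60.0):
    """f1 = (third parameter) / (fourth parameter), kept within the thresholds."""
    third_param = f0 * avg0               # f0 * avg0
    fourth_param = width0 + height0       # width0 + height0 (or |x1-x0| + |y1-y0|)
    f1 = third_param / fourth_param
    return max(f_min, min(f1, f_max))     # assumed clamping to the frequency thresholds

print(updated_frequency(f0=30.0, avg0=600.0, width0=160, height0=200))  # 50.0
```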
  • in the case where the feature parameters of the face area include the first start value, the second start value, the first width parameter, and the first height parameter, and the feature parameters of the target image frame include the third start value, the fourth start value, the second width parameter, and the second height parameter,
  • the deviation ratio is determined according to the feature parameters of the face area and the feature parameters of the target image frame, including:
  • summing half of the first width parameter and the first start value to determine the first center point parameter of the center point of the face area;
  • summing half of the first height parameter and the second start value to determine the second center point parameter of the center point of the face area;
  • summing half of the second width parameter and the third start value to determine the third center point parameter of the center point of the target image frame;
  • summing half of the second height parameter and the fourth start value to determine the fourth center point parameter of the center point of the target image frame;
  • summing the square of the difference between the first center point parameter and the third center point parameter with the square of the difference between the second center point parameter and the fourth center point parameter, and taking the square root of the sum to determine the fifth parameter, where the fifth parameter represents the distance between the center point of the face area and the center point of the target image frame;
  • taking the square root of the sum of the square of half of the second width parameter and the square of half of the second height parameter to determine the sixth parameter, where the sixth parameter represents half of the diagonal length of the target image frame;
  • dividing the fifth parameter by the sixth parameter to determine the deviation ratio.
  • denoting the fifth parameter as dis and the sixth parameter as dis_pre, the deviation ratio dratio = dis / dis_pre.
  • in the case where the feature parameters of the face area include the first start value, the first end value, the second start value, and the second end value, and the feature parameters of the target image frame include the third start value, the third end value, the fourth start value, and the fourth end value,
  • the deviation ratio between the center point of the face area and the center point of the target image frame is determined according to the target image frame and the face area, including:
  • summing the first start value with half of the difference between the first end value and the first start value to determine the first center point parameter of the center point of the face area;
  • summing the second start value with half of the difference between the second end value and the second start value to determine the second center point parameter of the center point of the face area;
  • summing the third start value with half of the difference between the third end value and the third start value to determine the third center point parameter of the center point of the target image frame;
  • summing the fourth start value with half of the difference between the fourth end value and the fourth start value to determine the fourth center point parameter of the center point of the target image frame;
  • summing the square of the difference between the first center point parameter and the third center point parameter with the square of the difference between the second center point parameter and the fourth center point parameter, and taking the square root of the sum to determine the fifth parameter, where the fifth parameter represents the distance between the center point of the face area and the center point of the target image frame;
  • taking the square root of the sum of the square of half of the absolute value of the difference between the third end value and the third start value and the square of half of the absolute value of the difference between the fourth end value and the fourth start value to determine the sixth parameter, where the sixth parameter represents half of the diagonal length of the target image frame;
  • dividing the fifth parameter by the sixth parameter to determine the deviation ratio.
  • as before, denoting the fifth parameter as dis and the sixth parameter as dis_pre, the deviation ratio dratio = dis / dis_pre.
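  • As an illustration only, a Python sketch of the deviation ratio computation for either parameterization: dis is the distance between the two center points (the fifth parameter), dis_pre is half of the target frame's diagonal length (the sixth parameter), and dratio = dis / dis_pre. The example rectangles are assumptions.

```python
import math

def center(x_start, y_start, width, height):
    """Center point of a rectangle given as (x_start, y_start, width, height)."""
    return x_start + width / 2.0, y_start + height / 2.0

def deviation_ratio(face_rect, frame_rect):
    """Both rects are (x_start, y_start, width, height); an endpoint form
    (x0, y0, x1, y1) can be converted with width = |x1-x0|, height = |y1-y0|."""
    fx, fy = center(*face_rect)    # first and second center point parameters
    tx, ty = center(*frame_rect)   # third and fourth center point parameters
    dis = math.hypot(fx - tx, fy - ty)                              # fifth parameter
    dis_pre = math.hypot(frame_rect[2] / 2.0, frame_rect[3] / 2.0)  # sixth parameter
    return dis / dis_pre                                            # dratio

print(round(deviation_ratio((200, 120, 160, 200), (0, 0, 640, 480)), 3))  # 0.112
```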
  • in the case where the feature parameters of the face area include the first start value, the second start value, the first width parameter, and the first height parameter, and the feature parameters of the target image frame include the third start value, the fourth start value, the second width parameter, and the second height parameter,
  • the operating frequency of the TOF sensor is adjusted and controlled according to the deviation ratio, the average depth value, and the preset operating frequency of the TOF sensor, including: multiplying the preset operating frequency by the deviation ratio to determine the seventh parameter; dividing the product of the preset operating frequency and the average depth value by the sum of the first width parameter and the first height parameter to determine the eighth parameter; and summing the seventh parameter and the eighth parameter to determine the updated operating frequency of the TOF sensor.
  • in the case where the feature parameters of the face area include the first start value, the first end value, the second start value, and the second end value, and the feature parameters of the target image frame include the third start value, the third end value, the fourth start value, and the fourth end value,
  • the operating frequency of the TOF sensor is adjusted and controlled according to the average depth value, the deviation ratio, and the preset operating frequency of the TOF sensor, including: multiplying the preset operating frequency by the deviation ratio to determine the seventh parameter; dividing the product of the preset operating frequency and the average depth value by the sum of the absolute value of the difference between the first end value and the first start value and the absolute value of the difference between the second end value and the second start value to determine the eighth parameter; and summing the seventh parameter and the eighth parameter to determine the updated operating frequency of the TOF sensor.
  • the updated operating frequency is less than the upper threshold of the operating frequency of the TOF sensor; further, the updated operating frequency of the TOF sensor is greater than the lower threshold of the operating frequency of the TOF sensor.
  • the preset operating frequency of the TOF sensor is f0
  • the first starting value of the face area is x0
  • the first end value of the face area is x1
  • the second starting value of the face area is y0
  • the second end value of the face area is y1
  • the first width parameter of the face area is width0
  • the first height parameter of the face area is height0
  • the average depth value is avg0
  • the deviation ratio is dratio.
  • in the first case, the third parameter is expressed as f0 × avg0, the fourth parameter is expressed as width0 + height0, the seventh parameter is expressed as f0 × dratio, and the eighth parameter is expressed as f0 × avg0 / (width0 + height0); therefore the updated operating frequency of the TOF sensor is f1 = f0 × avg0 / (width0 + height0) + f0 × dratio.
  • in the second case, the third parameter is expressed as f0 × avg0, the fourth parameter is expressed as |x1 − x0| + |y1 − y0|, the seventh parameter is expressed as f0 × dratio, and the eighth parameter is expressed as f0 × avg0 / (|x1 − x0| + |y1 − y0|); therefore the updated operating frequency of the TOF sensor is f1 = f0 × avg0 / (|x1 − x0| + |y1 − y0|) + f0 × dratio.
  • the farther the distance between the TOF sensor and the face area, that is, the larger the average depth value avg0, and the larger the deviation ratio, the higher the acquisition frequency (that is, the updated operating frequency of the TOF sensor), which improves the security of payment; the closer the distance between the TOF sensor and the face area, that is, the smaller the average depth value avg0, and the smaller the deviation ratio, the lower the acquisition frequency, which saves power and reduces power consumption.
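  • As an illustration only, a Python sketch of the combined update f1 = f0 × avg0 / (width0 + height0) + f0 × dratio. The clamping to the operating-frequency thresholds and the numeric values are assumptions consistent with the threshold constraints stated above.

```python
def combined_update(f0, avg0, width0, height0, dratio, f_min=10.0, f_max=60.0):
    """f1 = eighth parameter + seventh parameter, kept within the thresholds."""
    seventh_param = f0 * dratio                      # f0 * dratio
    eighth_param = f0 * avg0 / (width0 + height0)    # f0 * avg0 / (width0 + height0)
    f1 = eighth_param + seventh_param
    return max(f_min, min(f1, f_max))                # assumed clamping to the thresholds

print(combined_update(f0=30.0, avg0=600.0, width0=160, height0=200, dratio=0.112))
# ≈ 53.36, i.e. below the assumed 60.0 upper threshold
```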
  • the method may further include storing the updated operating frequency in an electronic system, so as to implement control of the collection frequency of the TOF sensor and improve the security of the payment process.
  • before determining the feature information of the face area according to the face area and the acquired depth information of the target image frame, that is, before step S20 is performed, the method further includes:
  • acquiring the identity of the preset application, and determining whether the preset application is a payment-type application according to the identity of the preset application;
  • if the preset application is a payment-type application, performing the step of determining the feature information of the face area according to the face area and the acquired depth information of the target image frame;
  • otherwise, the operating frequency of the TOF sensor is not adjusted or controlled.
  • that is, the feature information of the face area is determined according to the face area and the depth information of the acquired target image frame only if the preset application is a payment-type application; then the operation of adjusting and controlling the operating frequency of the TOF sensor according to the feature information and the preset operating frequency of the TOF sensor is performed. In other words, the operating frequency of the TOF sensor is adjusted only when the preset application is a payment-type application.
  • if the preset application is not a payment-type application, the operating frequency of the TOF sensor is not changed, that is, the TOF sensor works according to the preset operating frequency.
  • the preset application is configured to control the image capture device to acquire the target image frame.
  • Some embodiments of the present disclosure provide another method for controlling the working frequency of the TOF sensor.
  • a schematic flowchart of the method is shown in FIG. 2. It should be noted that the example shown in FIG. 2 takes the feature information including the average depth value as an example. As shown in FIG. 2, the method includes:
  • S201 Turn on the function of controlling the working frequency of the TOF sensor in the payment scenario.
  • S202 Acquire the ID of the preset application that starts the camera sensor (that is, the image acquisition device).
  • the ID of the preset application is represented by a character string
  • the ID of the preset application is an identity identifier used to distinguish various applications, such as camera applications, WeChat applications, Alipay applications, and certain banking applications.
  • S203 Load a preset parameter table corresponding to the working frequency of the TOF sensor.
  • each parameter in the preset parameter table may include, for example, the preset collection frequency f0 of the TOF sensor, the upper threshold of the operating frequency, the lower threshold of the operating frequency, and the frequency optimization coefficient of the TOF sensor; the preset collection frequency f0 of the TOF sensor is the preset operating frequency f0 of the TOF sensor described above. It should be noted that each parameter in the preset parameter table can also be manually adjusted by the user.
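  • As an illustration only, a Python sketch of such a preset parameter table; the field names and numeric values are assumptions, since the disclosure only lists the kinds of parameters involved.

```python
# Hypothetical preset parameter table for the TOF sensor (values are placeholders).
TOF_PARAMETER_TABLE = {
    "preset_frequency_f0_hz": 30.0,            # preset collection/operating frequency f0
    "frequency_upper_threshold_hz": 60.0,      # upper threshold of the operating frequency
    "frequency_lower_threshold_hz": 10.0,      # lower threshold of the operating frequency
    "frequency_optimization_coefficient": 1.0, # frequency optimization coefficient
}

def load_tof_parameters(overrides=None):
    """Return the parameter table, optionally with user-adjusted values."""
    params = dict(TOF_PARAMETER_TABLE)
    params.update(overrides or {})
    return params

print(load_tof_parameters({"preset_frequency_f0_hz": 25.0}))
```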
  • S204 The image capture device is turned on to obtain a preview video stream.
  • the image acquisition device is a camera, such as a mobile phone camera.
  • S205 Obtain a preview data frame according to the preview video stream; turn on the TOF sensor to obtain a depth data frame.
  • the preview data frame is the image frame to be processed as described above
  • the depth data frame is the depth image (ie the depth information described above) corresponding to the image frame to be processed
  • FIG. 3 is the depth image obtained by the TOF sensor.
  • S206 Input the preview data frame into the face detection model; the face detection model performs face detection on the preview data frame and judges whether there is a face in the preview data frame. If there is a face, perform the operation of S207; if there is no face, perform the operation of S213.
  • the face detection model can detect the key points in the face, and the face key point detection includes the following operations: a) collect a considerable number (for example, 100,000) of face images (the base library); b) accurately label the face key points of the face images from step a) (including but not limited to face contour points, eye contour points, nose contour points, eyebrow contour points, forehead contour points, upper lip contour points, bottom lip contour points, etc.); c) divide the accurately labelled data of step b) into a training set, a validation set, and a test set according to a certain proportion; d) use the training set of step c) to train the face detection model (a neural network), and use the validation set to verify the intermediate results obtained by the face detection model during training (adjusting the training parameters of the face detection model in real time).
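  • As an illustration only, a Python sketch of step c) above (splitting the labelled data into training, validation, and test sets); the 8:1:1 split ratio and the random seed are assumptions, since the disclosure only says "according to a certain proportion".

```python
import random

def split_dataset(samples, train_ratio=0.8, val_ratio=0.1, seed=0):
    """Shuffle the labelled samples and split them into train/val/test subsets."""
    samples = list(samples)
    random.Random(seed).shuffle(samples)
    n_train = int(len(samples) * train_ratio)
    n_val = int(len(samples) * val_ratio)
    return (samples[:n_train],                 # training set
            samples[n_train:n_train + n_val],  # validation set
            samples[n_train + n_val:])         # test set

train, val, test = split_dataset(range(100_000))
print(len(train), len(val), len(test))         # 80000 10000 10000
```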
  • the face area in the preview data frame (that is, the face area described above) is represented as Rect(x0, y0, width0, height0), and the face area is located in the face area plane coordinate system: in the horizontal axis direction of the face area plane coordinate system, the start value of the face area is x0 and the first width parameter of the face area is width0, that is, the width of the face area is width0; in the vertical axis direction of the face area plane coordinate system, the start value of the face area is y0 and the first height parameter of the face area is height0, that is, the height of the face area is height0.
  • S208 Determine whether the preset application in S202 is a payment application. If the preset application is a payment application, perform the operation of S209; if the preset application is not a payment application, perform the operation of S213.
  • according to the face area Rect(x0, y0, width0, height0), obtain the local depth information Depth(xi, yi) of the corresponding face area from the depth information Depth(x, y) of the current preview data frame, where the range of xi is (x0, x0 + width0) and the range of yi is (y0, y0 + height0); the average depth value avg0 of the face area is then determined from the local depth information.
  • S211 Adjust the operating frequency of the TOF sensor in real time according to the average depth value avg0 of the face area and the preset operating frequency f0 of the TOF sensor to determine the updated operating frequency of the TOF sensor.
  • the farther the distance between the TOF sensor and the face area, that is, the larger the average depth value avg0, the higher the updated operating frequency, and the better the security of payment; the closer the distance between the TOF sensor and the face area, that is, the smaller the average depth value avg0, the lower the updated operating frequency, which saves power and reduces power consumption.
  • S212 Update the updated operating frequency of the TOF sensor (that is, the operating frequency of the TOF sensor adjusted in real time) to the electronic system.
  • S213 Determine whether the preset application ends; if the preset application ends, perform the operation of S214; if the preset application has not ended, perform the operation of S204.
  • in FIG. 4, the abscissa is the average depth value avg0 and the ordinate is the operating frequency f of the TOF sensor; when the preset application is a non-payment application, for example, a camera application or a live-broadcast application, the operating frequency of the TOF sensor is not adjusted with the average depth value avg0.
  • in FIG. 5, the abscissa is the average depth value avg0 and the ordinate is the operating frequency f of the TOF sensor; when the preset application is a payment application, for example, the WeChat application or the Alipay application, point A is the moment at which the payment application is started, and the operating frequency of the TOF sensor is adjusted with the average depth value avg0. The larger the average depth value avg0, the higher the operating frequency of the TOF sensor, which ensures the security of the payment process. When the average depth value avg0 rises to a certain value, the operating frequency (collection frequency) of the TOF sensor reaches the upper threshold of the operating frequency; at this point, even if the average depth value avg0 continues to increase, the operating frequency of the TOF sensor no longer changes.
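  • As an illustration only, a Python sketch of the behaviour of FIG. 4 and FIG. 5: a non-payment application keeps the preset frequency f0, while a payment application's operating frequency grows with avg0 until it saturates at the upper threshold. The linear relation, slope, and numeric values are assumptions used purely to illustrate the curve shape.

```python
def operating_frequency(avg0, is_payment, f0=30.0, f_max=60.0, slope=0.05):
    """Frequency as a function of the average depth value avg0."""
    if not is_payment:
        return f0                            # FIG. 4: flat curve at the preset f0
    return min(f0 + slope * avg0, f_max)     # FIG. 5: rises with avg0, then saturates

for avg0 in (0, 200, 400, 600, 800):
    print(avg0, operating_frequency(avg0, is_payment=True))  # saturates at f_max
```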
  • the above method for controlling the operating frequency of the TOF sensor realizes dynamic adjustment and control of the operating frequency of the TOF sensor: if the distance between the TOF sensor and the face area is farther, the operating frequency of the TOF sensor is increased in real time, thereby improving the security of payment; if the distance between the TOF sensor and the face area is closer, the operating frequency of the TOF sensor is reduced in real time, thereby saving power and reducing power consumption, and significantly improving the user experience.
  • the embodiments of the present disclosure also provide a device for controlling the operating frequency of a TOF sensor.
  • the schematic diagram of the device is shown in FIG. 6.
  • The device 60 for controlling the operating frequency of the TOF sensor includes a first processing module 601, a second processing module 602, and a third processing module 603.
  • the first processing module 601 is configured to input the target image frame into a preset face detection model for face detection, and determine the face area in the target image frame;
  • the second processing module 602 is configured to determine the feature information of the face area according to the face area and the depth information of the target image frame obtained by the TOF sensor;
  • the third processing module 603 is configured to adjust and control the operating frequency of the TOF sensor according to the characteristic information and the preset operating frequency of the TOF sensor.
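  • One possible way to organize the three modules of device 60 is sketched below as a C++ interface; the class and member names are assumptions for illustration, and the disclosure does not prescribe this decomposition:
      // Illustrative interface for the control device 60 and its three modules.
      struct FaceArea { bool found; long x0, y0, width0, height0; };

      class TofFrequencyControlDevice {
      public:
          // First processing module 601: face detection on the target image frame.
          virtual FaceArea DetectFace(const unsigned char* frame, long width, long height) = 0;
          // Second processing module 602: feature information of the face area
          // (average depth value and, optionally, the deviation ratio).
          virtual float AverageDepth(const float* depthMap, long frameWidth, const FaceArea& face) = 0;
          // Third processing module 603: adjust the TOF operating frequency.
          virtual float AdjustFrequency(float presetFrequency, float averageDepth, const FaceArea& face) = 0;
          virtual ~TofFrequencyControlDevice() = default;
      };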
  • the first processing module 601 is also used to acquire multiple image frames to be processed, and the TOF sensor is used to acquire and determine the depth information of each image frame to be processed.
  • The first processing module 601 is specifically configured to input any one of the multiple to-be-processed image frames into a preset face detection model for face detection; if a face is detected, that to-be-processed image frame is used as the target image frame, and the face area in the target image frame is determined.
  • the feature information includes the average depth value of the face region
  • The second processing module 602 is specifically configured to: determine, according to the face area and the depth information of the target image frame acquired by the TOF sensor, the local depth information corresponding to each part of the face area; and determine the average depth value according to the face area and the local depth information.
  • the third processing module 603 is configured to adjust and control the operating frequency of the TOF sensor according to the average depth value and the preset operating frequency of the TOF sensor.
  • the second processing module 602 is specifically configured to determine the local depth information corresponding to each part of the face area according to the feature parameters of the face area and the depth information of the target image frame.
  • In some examples, the feature parameters of the face area include a first starting value of the face area corresponding to the first direction, a second starting value of the face area corresponding to the second direction, a first width parameter of the face area corresponding to the first direction, and a first height parameter of the face area corresponding to the second direction. In this case, the second processing module 602 is specifically configured to determine the local depth information corresponding to each part of the face area according to the first starting value, the second starting value, the first width parameter, the first height parameter, and the depth information of the target image frame.
  • In other examples, the feature parameters of the face area include a first starting value and a first end value corresponding to the first direction of the face area, and a second starting value and a second end value corresponding to the second direction of the face area. In this case, the second processing module 602 is specifically configured to determine the local depth information corresponding to each part of the face area according to the first starting value and the first end value corresponding to the first direction, the second starting value and the second end value corresponding to the second direction, and the depth information of the target image frame.
  • The face area is located in the face area plane coordinate system; the first direction of the face area is parallel to the horizontal axis of the face area plane coordinate system, that is, the first direction of the face area includes the horizontal axis direction of that coordinate system, and the second direction of the face area is parallel to the vertical axis of the face area plane coordinate system, that is, the second direction of the face area includes the vertical axis direction of that coordinate system.
  • The second processing module 602 is also specifically configured to sum the local depth information corresponding to each part of the face area to determine the first parameter; determine the second parameter according to the product of the first width parameter of the face area and the first height parameter of the face area; and divide the first parameter by the second parameter to determine the average depth value.
  • Alternatively, the second processing module 602 is specifically configured to sum the local depth information corresponding to each part of the face area to determine the first parameter; determine the second parameter according to the product of the absolute value of the difference between the first end value and the first starting value and the absolute value of the difference between the second end value and the second starting value; and divide the first parameter by the second parameter to determine the average depth value.
  • The third processing module 603 is specifically configured to determine the third parameter according to the product of the average depth value and the preset operating frequency of the TOF sensor; sum the first width parameter of the face area and the first height parameter of the face area to determine the fourth parameter; and divide the third parameter by the fourth parameter to determine the updated operating frequency of the TOF sensor.
  • Alternatively, the third processing module 603 is specifically configured to determine the third parameter according to the product of the average depth value and the preset operating frequency; sum the absolute value of the difference between the first end value and the first starting value and the absolute value of the difference between the second end value and the second starting value to determine the fourth parameter; and divide the third parameter by the fourth parameter to determine the updated operating frequency of the TOF sensor.
  • the updated operating frequency of the TOF sensor is less than the upper threshold of the operating frequency of the TOF sensor.
  • the feature information includes the average depth value of the face region and the deviation ratio between the center point of the face region and the center point of the target image frame
  • The second processing module 602 is specifically configured to: determine, according to the face area and the depth information of the target image frame obtained by the TOF sensor, the local depth information corresponding to each part of the face area; determine the average depth value according to the face area and the local depth information; and determine the deviation ratio according to the target image frame and the face area.
  • the third processing module 603 is used for adjusting and controlling the working frequency of the TOF sensor according to the deviation ratio, the average depth value and the preset working frequency of the TOF sensor.
  • the second processing module 602 is further specifically configured to determine the deviation ratio according to the feature parameters of the face region and the feature parameters of the target image frame.
  • the characteristic parameters in the face region include a first starting value, a second starting value, a first width parameter, and a first height parameter
  • The characteristic parameters of the target image frame include a third starting value, a fourth starting value, a second width parameter, and a second height parameter.
  • The second processing module 602 is also specifically configured to sum half of the first width parameter and the first starting value to determine the first center point parameter of the center point of the face area; sum half of the first height parameter and the second starting value to determine the second center point parameter of the center point of the face area; sum half of the second width parameter and the third starting value to determine the third center point parameter of the center point of the target image frame; sum half of the second height parameter and the fourth starting value to determine the fourth center point parameter of the center point of the target image frame; take the square root of the sum of the square of the difference between the first center point parameter and the third center point parameter and the square of the difference between the second center point parameter and the fourth center point parameter to determine the fifth parameter, where the fifth parameter represents the distance between the center point of the face area and the center point of the target image frame; take the square root of the sum of the square of half of the second width parameter and the square of half of the second height parameter to determine the sixth parameter, where the sixth parameter represents half of the diagonal length of the target image frame; and divide the fifth parameter by the sixth parameter to determine the deviation ratio.
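  • A minimal C++ sketch of the deviation-ratio computation described above, following the formulas given later in this disclosure (cx1 = x0 + width0/2, cy1 = y0 + height0/2, cx2 = x10 + width10/2, cy2 = y10 + height10/2, dis = sqrt((cx2-cx1)×(cx2-cx1)+(cy2-cy1)×(cy2-cy1)), dis_pre = sqrt((width10/2)×(width10/2)+(height10/2)×(height10/2)), dratio = dis/dis_pre); the function name is illustrative:
      #include <cmath>

      // Deviation ratio dratio between the centre of the face area
      // Rect(x0, y0, width0, height0) and the centre of the target image frame
      // Targ(x10, y10, width10, height10).
      float DeviationRatio(long x0, long y0, long width0, long height0,
                           long x10, long y10, long width10, long height10) {
          float cx1 = x0 + width0 / 2.0f;          // first centre point parameter
          float cy1 = y0 + height0 / 2.0f;         // second centre point parameter
          float cx2 = x10 + width10 / 2.0f;        // third centre point parameter
          float cy2 = y10 + height10 / 2.0f;       // fourth centre point parameter
          float dis = std::sqrt((cx2 - cx1) * (cx2 - cx1) + (cy2 - cy1) * (cy2 - cy1));
          float dis_pre = std::sqrt((width10 / 2.0f) * (width10 / 2.0f) +
                                    (height10 / 2.0f) * (height10 / 2.0f));
          return dis / dis_pre;                    // deviation ratio dratio
      }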
  • the feature parameters in the face region include a first start value, a first end value, a second start value, and a second end value
  • In this case, the feature parameters of the target image frame include a third starting value, a third end value, a fourth starting value, and a fourth end value, and the second processing module 602 is also specifically configured to sum half of the difference between the first end value and the first starting value with the first starting value to determine the first center point parameter of the center point of the face area; sum half of the difference between the second end value and the second starting value with the second starting value to determine the second center point parameter; sum half of the difference between the third end value and the third starting value with the third starting value to determine the third center point parameter of the center point of the target image frame; sum half of the difference between the fourth end value and the fourth starting value with the fourth starting value to determine the fourth center point parameter; take the square root of the sum of the square of the difference between the first center point parameter and the third center point parameter and the square of the difference between the second center point parameter and the fourth center point parameter to determine the fifth parameter; take the square root of the sum of the square of half of the absolute value of the difference between the third end value and the third starting value and the square of half of the absolute value of the difference between the fourth end value and the fourth starting value to determine the sixth parameter; and divide the fifth parameter by the sixth parameter to determine the deviation ratio.
  • the characteristic parameters in the face area include a first starting value, a second starting value, a first width parameter, and a first height parameter
  • The characteristic parameters of the target image frame include a third starting value, a fourth starting value, a second width parameter, and a second height parameter.
  • The third processing module 603 is specifically configured to determine the third parameter according to the product of the average depth value and the preset operating frequency; sum the first width parameter of the face area with the first height parameter of the face area to determine the fourth parameter; determine the seventh parameter according to the product of the deviation ratio and the preset operating frequency; divide the third parameter by the fourth parameter to determine the eighth parameter; and sum the seventh parameter and the eighth parameter to determine the updated operating frequency of the TOF sensor.
  • the characteristic parameters in the face region include a first starting value, a first ending value, a second starting value, and a second ending value
  • The characteristic parameters of the target image frame include a third starting value, a third end value, a fourth starting value, and a fourth end value.
  • The third processing module 603 is specifically configured to determine the third parameter according to the product of the average depth value and the preset operating frequency; sum the absolute value of the difference between the first end value and the first starting value and the absolute value of the difference between the second end value and the second starting value to determine the fourth parameter; determine the seventh parameter according to the product of the deviation ratio and the preset operating frequency; divide the third parameter by the fourth parameter to determine the eighth parameter; and sum the seventh parameter and the eighth parameter to determine the updated operating frequency of the TOF sensor.
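  • A sketch of the combined update f1 = f0 × avg0 / (width0 + height0) + f0 × dratio, where width0 + height0 may equivalently be written |x1-x0| + |y1-y0|; as with the depth-only case, clamping to the configured thresholds is an assumption made explicit here, the disclosure itself only requiring the result to stay below the upper threshold:
      #include <algorithm>

      // Updated frequency when the feature information includes both the average
      // depth value avg0 and the deviation ratio dratio:
      //   f1 = f0 * avg0 / (width0 + height0) + f0 * dratio.
      float UpdatedFrequencyWithDeviation(float f0, float avg0, float dratio,
                                          long width0, long height0,
                                          float lowerThreshold, float upperThreshold) {
          float f1 = f0 * avg0 / static_cast<float>(width0 + height0) + f0 * dratio;
          return std::min(std::max(f1, lowerThreshold), upperThreshold);
      }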
  • The second processing module 602 is also specifically configured to determine whether the preset application is a payment-type application according to the identity of the preset application; if the preset application is a payment-type application, the feature information of the face area is determined according to the face area and the acquired depth information of the target image frame, and then the operating frequency of the TOF sensor is adjusted and controlled according to the feature information and the preset operating frequency of the TOF sensor. It should be noted that if the preset application is not a payment-type application, the operating frequency of the TOF sensor is not adjusted or controlled.
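  • A sketch of the payment-type gate; the disclosure identifies applications by an ID string, so the lookup against a set of payment-application IDs (and the example ID) is an assumption for illustration:
      #include <set>
      #include <string>

      // Only payment-type applications trigger a frequency update; otherwise the
      // preset operating frequency f0 is kept unchanged.
      float ControlFrequency(const std::string& appId, float f0, float f1Candidate) {
          static const std::set<std::string> paymentAppIds = {"com.example.pay"};  // assumed IDs
          return paymentAppIds.count(appId) ? f1Candidate : f0;
      }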
  • The first processing module 601 is configured to perform the operation of step S10 in the method for controlling the operating frequency of the TOF sensor described above, the second processing module 602 is configured to perform the operation of step S20, and the third processing module 603 is configured to perform the operation of step S30. For the specific operations performed by the first processing module 601, the second processing module 602, and the third processing module 603, reference may be made to the above embodiments of the method for controlling the operating frequency of the TOF sensor; repeated details are not described here.
  • the first processing module 601, the second processing module 602, and/or the third processing module 603 may be dedicated hardware devices to implement the first processing module 601, Some or all of the functions of the second processing module 602 and/or the third processing module 603.
  • the first processing module 601, the second processing module 602, and/or the third processing module 603 may be one circuit board or a combination of multiple circuit boards, which are used to implement the functions described above.
  • The one circuit board or the combination of multiple circuit boards may include: (1) one or more processors; (2) one or more non-transitory computer-readable memories connected to the processors; and (3) processor-executable firmware stored in the memories.
  • the first processing module 601, the second processing module 602, and/or the third processing module 603 include codes and programs stored in a memory; the processor can execute the codes and programs to Some or all of the functions of the first processing module 601, the second processing module 602, and/or the third processing module 603 as described above are implemented.
  • The device for controlling the operating frequency of the TOF sensor realizes dynamic adjustment and control of the operating frequency of the TOF sensor: if the distance between the TOF sensor and the face area is farther, or if that distance is farther and the deviation ratio is larger, the operating frequency of the TOF sensor is increased in real time, thereby improving the security of payment; if the distance between the TOF sensor and the face area is closer, or if that distance is closer and the deviation ratio is smaller, the operating frequency of the TOF sensor is reduced in real time, thereby saving power and reducing power consumption. This significantly improves the user experience.
  • the embodiments of the present disclosure also provide an electronic device.
  • the structural diagram of the electronic device is shown in FIG. 7.
  • the electronic device 7000 includes at least one processor 7001, a memory 7002, and a bus 7003.
  • The processor 7001 is electrically connected to the memory 7002 through the bus 7003; the memory 7002 is configured to store at least one computer-executable instruction, and the processor 7001 is configured to execute the at least one computer-executable instruction, thereby performing the steps of the method for controlling the operating frequency of the TOF sensor provided by any embodiment or any optional implementation of the present disclosure.
  • The processor 7001 may be an FPGA (Field-Programmable Gate Array) or another device with logic processing capabilities, such as an MCU (Microcontroller Unit) or a CPU (Central Processing Unit).
  • the memory 7002 may include any combination of one or more computer program products, and the computer program products may include various forms of computer-readable storage media, such as volatile memory and/or nonvolatile memory.
  • Volatile memory may include random access memory (RAM) and/or cache memory (cache), for example.
  • the non-volatile memory may include, for example, read only memory (ROM), hard disk, erasable programmable read only memory (EPROM), portable compact disk read only memory (CD-ROM), USB memory, flash memory, etc.
  • One or more computer-executable instructions may be stored on the computer-readable storage medium, and the processor 7001 may run the computer-executable instructions to implement various functions.
  • the computer-readable storage medium may also store various application programs and various data, as well as various data used and/or generated by the application programs.
  • The electronic device realizes dynamic adjustment and control of the operating frequency of the TOF sensor: if the distance between the TOF sensor and the face area is farther, or if that distance is farther and the deviation ratio is larger, the operating frequency of the TOF sensor is increased in real time, thereby improving the security of payment; if the distance between the TOF sensor and the face area is closer, or if that distance is closer and the deviation ratio is smaller, the operating frequency of the TOF sensor is reduced in real time, thereby saving power and reducing power consumption. This significantly improves the user experience.
  • The embodiments of the present disclosure also provide a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method for controlling the operating frequency of the TOF sensor provided by any embodiment or any optional implementation of the present disclosure.
  • The computer-readable storage medium includes, but is not limited to, any type of disk (including floppy disk, hard disk, optical disk, CD-ROM, and magneto-optical disk), ROM (Read-Only Memory), RAM (Random Access Memory), EPROM (Erasable Programmable Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), flash memory, magnetic card, or optical card. That is, a readable storage medium includes any medium that stores or transmits information in a form readable by a device (for example, a computer).
  • the computer-readable storage medium may be applied to the electronic device provided in any of the foregoing embodiments, for example, it may be a memory in the electronic device.
  • Input the target image frame into the preset face detection model for face detection and determine the face area in the target image frame; determine the feature information of the face area according to the face area and the depth information of the target image frame obtained by the TOF sensor; and adjust and control the operating frequency of the TOF sensor according to the feature information and the preset operating frequency of the TOF sensor. In this way, the operating frequency of the TOF sensor is dynamically adjusted and controlled: if the distance between the TOF sensor and the face area is farther, or if that distance is farther and the deviation ratio is larger, the operating frequency of the TOF sensor is increased in real time, thereby improving the security of payment; if the distance between the TOF sensor and the face area is closer, or if that distance is closer and the deviation ratio is smaller, the operating frequency of the TOF sensor is reduced in real time, thereby saving power and reducing power consumption. This significantly improves the user experience.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Multimedia (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Accounting & Taxation (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Computer Security & Cryptography (AREA)
  • Electromagnetism (AREA)
  • Finance (AREA)
  • Optics & Photonics (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

A method, apparatus, device, and computer-readable storage medium for controlling the operating frequency of a TOF sensor. The method includes: inputting a target image frame into a preset face detection model for face detection and determining the face area in the target image frame (S10); determining feature information of the face area according to the face area and the depth information of the target image frame acquired by the TOF sensor (S20); and adjusting and controlling the operating frequency of the TOF sensor according to the feature information and the preset operating frequency of the TOF sensor (S30). The method realizes dynamic adjustment and control of the operating frequency of the TOF sensor and significantly improves the user experience.

Description

TOF传感器的工作频率的控制方法、装置、设备及介质
本公开要求于2019年04月18日递交的中国专利申请第201910313354.4号的优先权,在此全文引用上述中国专利申请公开的内容以作为本公开的一部分。
技术领域
本公开涉及计算机技术领域,具体而言,本公开涉及一种TOF传感器的工作频率的控制方法、装置、设备及计算机可读存储介质。
背景技术
随着科学技术的发展和技术产业化应用水平的提升,手机的性能越来越好、硬件配置已经越来越完备。但同时,随着手机市场竞争越来越激烈,拼硬件配置已经不能吸引到更多的电子消费者,所以,大部分的手机厂商都在追求手机产品的差异化功能规划、设计、营销等。如正逐步流行的手机技术应用包括人脸解锁、人脸重塑、3D美颜、3D打光等。
对于支付场景的TOF(Time of flight,飞行时间)传感器的频率控制这个应用场景来说,现有技术存在TOF传感器的工作频率(采集频率)不可调节、不能按照应用场景(例如支付场景)进行调节和用户体验较差等问题。
发明内容
本公开针对现有的方式的缺点,提出一种TOF传感器的工作频率的控制方法、装置、设备及计算机可读存储介质,用以解决如何实现对TOF传感器的工作频率进行动态调节控制的问题。
第一方面,本公开的一些实施例提供了一种飞行时间TOF传感器的工作频率的控制方法,包括:
将目标图像帧输入预设的人脸检测模型进行人脸检测,确定目标图像帧中的人脸区域;
根据人脸区域和由TOF传感器获取的目标图像帧的深度信息,确定人脸区域的特征信息;
根据特征信息和TOF传感器的预设工作频率,对TOF传感器的工作频率 进行调节控制。
可选地,根据所述人脸区域和由所述TOF传感器获取的所述目标图像帧的深度信息,确定所述人脸区域的特征信息包括:根据所述人脸区域和由所述TOF传感器获取的所述目标图像帧的深度信息,确定与所述人脸区域的各个局部对应的局部深度信息;根据所述人脸区域和所述局部深度信息,确定所述人脸区域的平均深度值,其中,所述特征信息包括所述平均深度值。
可选地,根据所述人脸区域和由所述TOF传感器获取的所述目标图像帧的深度信息,确定所述人脸区域的特征信息还包括:根据所述目标图像帧和所述人脸区域,确定所述人脸区域的中心点与所述目标图像帧的中心点之间的偏离比率,其中,所述特征信息还包括所述偏离比率。
可选地,将目标图像帧输入预设的人脸检测模型进行人脸检测之前,该方法还包括:获取多张待处理图像帧,并通过TOF传感器获取确定多张待处理图像帧的深度信息;
将目标图像帧输入预设的人脸检测模型进行人脸检测,确定目标图像帧中的人脸区域,包括:
将多张待处理图像帧中的任一待处理图像帧输入预设的人脸检测模型进行人脸检测,若检测到人脸,则将任一待处理图像作为目标图像帧,并确定目标图像帧中的人脸区域。
可选地,根据人脸区域和由述TOF传感器获取的目标图像帧的深度信息,确定与人脸区域的各个局部对应的局部深度信息,包括:根据所述人脸区域的特征参数和所述目标图像帧的深度信息,确定与所述人脸区域的各个局部对应的所述局部深度信息。
可选地,根据所述目标图像帧和所述人脸区域,确定所述人脸区域的中心点与所述目标图像帧的中心点之间的偏离比率,包括:根据所述人脸区域的特征参数和所述目标图像帧的特征参数,确定所述偏离比率。
可选地,所述人脸区域的特征参数包括所述人脸区域的与第一方向对应的第一起始值、所述人脸区域的与第二方向对应的第二起始值、所述人脸区域的与所述第一方向对应的第一宽度参数、所述人脸区域的与所述第二方向对应的第一高度参数;或者,所述人脸区域的特征参数包括所述人脸区域的与第一方向对应的第一起始值和第一终点值、所述人脸区域的与第二方向对应的第二起始值和第二终点值;所述目标图像帧的特征参数包括所述目标图像帧的与所述 第一方向对应的第三起始值、所述目标图像帧的与所述第二方向对应的第四起始值、所述目标图像帧的与所述第一方向对应的第二宽度参数、所述目标图像帧的与所述第二方向对应的第二高度参数;或者,所述目标图像帧的特征参数包括所述目标图像帧的与所述第一方向对应的第三起始值和第三终点值、所述目标图像帧的与所述第二方向对应的第四起始值和第四终点值;其中,人脸区域位于人脸区域平面坐标系中,人脸区域的第一方向平行于人脸区域平面坐标系的横轴方向,人脸区域的第二方向平行于人脸区域平面坐标系的纵轴方向。
可选地,根据人脸区域和局部深度信息,确定人脸区域的平均深度值,包括:
将局部深度信息求和,确定第一参数;
根据人脸区域的第一宽度参数与人脸区域的第一高度参数的乘积,确定第二参数,或者,根据所述第一终点值和所述第一起始值之间的差值的绝对值和所述第二终点值和所述第二起始值之间的差值的绝对值的乘积,确定第二参数;
将第一参数与第二参数相除,确定平均深度值。
可选地,根据所述特征信息和所述TOF传感器的预设工作频率,对所述TOF传感器的工作频率进行调节控制,包括:根据所述平均深度值和所述TOF传感器的预设工作频率,对所述TOF传感器的工作频率进行调节控制。
可选地,根据平均深度值和TOF传感器的预设工作频率,对TOF传感器的工作频率进行调节控制,包括:
根据平均深度值和TOF传感器的预设工作频率的乘积,确定第三参数;
将人脸区域的第一宽度参数与人脸区域的第一高度参数求和,确定第四参数,或者,将所述第一终点值和所述第一起始值之间的差值的绝对值和所述第二终点值和所述第二起始值之间的差值的绝对值求和,确定第四参数;
将第三参数与第四参数相除,确定TOF传感器的更新后的工作频率,其中,TOF传感器的更新后的工作频率小于工作频率的上限阈值。
可选地,根据所述人脸区域的特征参数和所述目标图像帧的特征参数,确定所述偏离比率,包括:
根据所述第一宽度参数和所述第一起始值,确定所述人脸区域的中心点的第一中心点参数,或者,根据所述第一终点值和所述第一起始值,确定所述人脸区域的中心点的第一中心点参数;
根据所述第一高度参数和所述第二起始值,确定所述人脸区域的中心点的 第二中心点参数,或者,根据所述第二终点值和所述第二起始值,确定所述人脸区域的中心点的第二中心点参数;
根据所述第二宽度参数和所述第三起始值,确定所述目标图像帧的中心点的第三中心点参数,或者,根据所述第三终点值和所述第三起始值,确定所述目标图像帧的中心点的第三中心点参数;
根据所述第二高度参数和所述第四起始值,确定所述目标图像帧的中心点的第四中心点参数,或者,根据所述第四终点值和所述第四起始值,确定所述目标图像帧的中心点的第四中心点参数;
根据所述第一中心点参数、所述第三中心点参数、所述第二中心点参数、所述第四中心点参数,确定第五参数,其中,所述第五参数表示所述人脸区域的中心点和所述目标图像帧的中心点之间的距离;
根据所述第二宽度参数和所述第二高度参数,确定第六参数,或者,根据所述第三终点值、所述第三起始值、所述第四终点值和所述第四起始值,确定第六参数,其中,所述第六参数表示所述目标图像帧的对角线长度的一半;
将所述第五参数与所述第六参数相除,确定所述偏离比率。
可选地,根据所述特征信息和所述TOF传感器的预设工作频率,对所述TOF传感器的工作频率进行调节控制,包括:所述根据所述平均深度值、所述偏离比率和所述TOF传感器的预设工作频率,对所述TOF传感器的工作频率进行调节控制。
可选地,所述根据所述平均深度值、所述偏离比率和所述TOF传感器的预设工作频率,对所述TOF传感器的工作频率进行调节控制,包括:
根据所述平均深度值和所述预设工作频率的乘积,确定第三参数;
将所述人脸区域的第一宽度参数与所述人脸区域的第一高度参数求和,确定第四参数,或者,将所述第一终点值和所述第一起始值之间的差值的绝对值和所述第二终点值和所述第二起始值之间的差值的绝对值求和,确定第四参数;
根据所述偏离比率和所述预设工作频率的乘积,确定第七参数;
将所述第三参数与所述第四参数相除,确定第八参数;
将所述第七参数和所述第八参数求和,确定所述TOF传感器的更新后的工作频率,其中,所述更新后的工作频率小于所述TOF传感器的工作频率的上限阈值。
可选地,在根据人脸区域和获取的目标图像帧的深度信息,确定与人脸区 域的特征信息之前,该方法还包括:
根据预设应用的身份标识,判断预设应用是否是支付类型的应用,其中,所述预设应用被配置为控制图像采集装置获取所述目标图像帧;
若所述预设应用是支付类型的应用,则执行根据人脸区域和已获取的目标图像帧的深度信息,确定与人脸区域的特征信息的步骤;
若预设应用不是支付类型的应用,则不对所述TOF传感器的工作频率进行调节控制。
第二方面,本公开的一些实施例还提供了一种TOF传感器的工作频率的控制装置,包括:
第一处理模块,用于将目标图像帧输入预设的人脸检测模型进行人脸检测,确定目标图像帧中的人脸区域;
第二处理模块,用于根据人脸区域和由TOF传感器获取的目标图像帧的深度信息,确定人脸区域的特征信息;
第三处理模块,用于根据特征信息和TOF传感器的预设工作频率,对TOF传感器的工作频率进行调节控制。
可选地,所述特征信息包括所述人脸区域的平均深度值,
所述第二处理模块用于:根据所述人脸区域和由所述TOF传感器获取的所述目标图像帧的深度信息,确定与所述人脸区域的各个局部对应的局部深度信息;根据所述人脸区域和所述局部深度信息,确定所述平均深度值;
所述第三处理模块用于:根据所述平均深度值和所述TOF传感器的预设工作频率,对所述TOF传感器的工作频率进行调节控制。
可选地,所述特征信息包括所述人脸区域的平均深度值、所述人脸区域的中心点与所述目标图像帧的中心点之间的偏离比率,
所述第二处理模块用于:根据所述人脸区域和由所述TOF传感器获取的所述目标图像帧的深度信息,确定与所述人脸区域的各个局部对应的局部深度信息;根据所述人脸区域和所述局部深度信息,确定所述平均深度值;根据所述目标图像帧和所述人脸区域,确定所述偏离比率;
所述第三处理模块用于:根据所述偏离比率、所述平均深度值和所述TOF传感器的预设工作频率,对所述TOF传感器的工作频率进行调节控制。
第三方面,本公开的一些实施例还提供了一种电子设备,包括:处理器、存储器和总线;
总线,用于连接处理器和存储器;
存储器,用于存储计算机程序;
处理器,用于通过调用并运行计算机程序,执行本公开的上述任一实施例提供的TOF传感器的工作频率的控制方法。
第四方面,本公开的一些实施例还提供了一种计算机可读存储介质,计算机可读存储介质存储有计算机程序,计算机程序被用于执行本公开的上述任一实施例提供的TOF传感器的工作频率的控制方法。
本公开实施例提供的技术方案,至少具有如下有益效果:
本公开的一些实施例提供的TOF传感器的工作频率的控制方法包括:将目标图像帧输入预设的人脸检测模型进行人脸检测,确定目标图像帧中的人脸区域;根据人脸区域和由TOF传感器获取的目标图像帧的深度信息,确定人脸区域的特征信息;根据特征信息和TOF传感器的预设工作频率,对TOF传感器的工作频率进行调节控制;如此,实现了根据TOF传感器与人脸区域之间的距离或根据TOF传感器与人脸区域之间的距离和人脸区域的中心点的偏离目标图像帧的中心点的程度(即偏离比率)对TOF传感器的工作频率进行动态调节控制,TOF传感器与人脸区域之间距离越远或若TOF传感器与人脸区域之间的距离越远且偏离比率越大,实时将TOF传感器的工作频率提高,从而提高支付的安全性;TOF传感器与人脸区域之间距离越近或若TOF传感器与人脸区域之间的距离越近且偏离比率越小,实时将TOF传感器的工作频率降低,从而节省电量,降低功耗;显著地提升了用户体验。
本公开附加的方面和优点将在下面的描述中部分给出,这些将从下面的描述中变得明显,或通过本公开的实践了解到。
附图说明
为了更清楚地说明本公开实施例中的技术方案,下面将对本公开实施例描述中所需要使用的附图作简单地介绍。
图1A为本公开的一实施例提供的一种TOF传感器的工作频率的控制方法的流程示意图;
图1B为本公开的一实施例提供的又一种TOF传感器的工作频率的控制方法的流程示意图;
图1C为本公开的一实施例提供的另一种TOF传感器的工作频率的控制方 法的流程示意图;
图2为本公开的一实施例提供的另一种TOF传感器的工作频率的控制方法的流程示意图;
图3为本公开的一实施例提供的TOF传感器获取的深度图像示意图;
图4为本公开的一实施例提供的非支付类应用对应的TOF传感器频率曲线示意图;
图5为本公开的一实施例提供的支付类应用对应的TOF传感器频率曲线示意图;
图6为本公开的一实施例提供的一种TOF传感器的工作频率的控制装置的结构示意图;
图7为本公开的一实施例提供的一种电子设备的结构示意图。
具体实施方式
下面详细描述本公开的实施例,所述实施例的示例在附图中示出,其中自始至终相同或类似的标号表示相同或类似的元件或具有相同或类似功能的元件。下面通过参考附图描述的实施例是示例性的,仅用于解释本公开,而不能解释为对本发明的限制。
本技术领域技术人员可以理解,除非特意声明,这里使用的单数形式“一”、“一个”、“所述”和“该”也可包括复数形式。应该进一步理解的是,本公开的说明书中使用的措辞“包括”是指存在所述特征、整数、步骤、操作、元件和/或组件,但是并不排除存在或添加一个或多个其他特征、整数、步骤、操作、元件、组件和/或它们的组。应该理解,当我们称元件被“连接”或“耦接”到另一元件时,它可以直接连接或耦接到其他元件,或者也可以存在中间元件。此外,这里使用的“连接”或“耦接”可以包括无线连接或无线耦接。这里使用的措辞“和/或”包括一个或更多个相关联的列出项的全部或任一单元和全部组合。
本技术领域技术人员可以理解,除非另外定义,这里使用的所有术语(包括技术术语和科学术语),具有与本发明所属领域中的普通技术人员的一般理解相同的意义。还应该理解的是,诸如通用字典中定义的那些术语,应该被理解为具有与现有技术的上下文中的意义一致的意义,并且除非像这里一样被特定定义,否则不会用理想化或过于正式的含义来解释。
下面以具体地实施例对本公开的技术方案以及本公开的技术方案如何解 决上述技术问题进行详细说明。下面这几个具体的实施例可以相互结合,对于相同或相似的概念或过程可能在某些实施例中不再赘述。下面将结合附图,对本公开的实施例进行描述。
本公开的一些实施例中提供了一种TOF传感器的工作频率的控制方法,该方法的流程示意图如图1A所示,该方法包括:
S10,将目标图像帧输入预设的人脸检测模型进行人脸检测,确定目标图像帧中的人脸区域。
S20,根据人脸区域和由TOF传感器获取的目标图像帧的深度信息,确定人脸区域的特征信息。
S30,根据特征信息和TOF传感器的预设工作频率,对TOF传感器的工作频率进行调节控制。
例如,在一些实施例中,特征信息包括与人脸区域的平均深度信息,在此情况下,如图1B所示,该方法包括:
S101,将目标图像帧输入预设的人脸检测模型进行人脸检测,确定目标图像帧中的人脸区域。
S102,根据人脸区域和由TOF传感器获取的目标图像帧的深度信息,确定与人脸区域的各个局部对应的局部深度信息。
S103,根据人脸区域和局部深度信息,确定人脸区域的平均深度值。
S104,根据平均深度值和TOF传感器的预设工作频率,对TOF传感器的工作频率进行调节控制。
也就是说,图1A中的步骤S10包括图1B中的步骤S101,图1A中的步骤S20包括图1B中的步骤S102和S103,图1A中的步骤S30包括图1B中的步骤S104。
例如,在另一些实施例中,特征信息包括与人脸区域的平均深度值和人脸区域的中心点与目标图像帧的中心点之间的偏离比率,在此情况下,如图1C所示,该方法包括:
S101,将目标图像帧输入预设的人脸检测模型进行人脸检测,确定目标图像帧中的人脸区域。
S102,根据人脸区域和由TOF传感器获取的目标图像帧的深度信息,确定与人脸区域的各个局部对应的局部深度信息。
S103,根据人脸区域和局部深度信息,确定人脸区域的平均深度值。
S105,根据目标图像帧和人脸区域,确定人脸区域的中心点与目标图像帧的中心点之间的偏离比率。
S106,根据偏离比率、平均深度值和TOF传感器的预设工作频率,对TOF传感器的工作频率进行调节控制。
也就是说,图1A中的步骤S10包括图1C中的步骤S101,图1A中的步骤S20包括图1C中的步骤S102、S103和S105,图1A中的步骤S30包括图1B中的步骤S106。
本公开实施例中,将目标图像帧输入预设的人脸检测模型进行人脸检测,确定目标图像帧中的人脸区域;根据人脸区域和由TOF传感器获取的目标图像帧的深度信息,确定与人脸区域的各个局部对应的局部深度信息;根据人脸区域和人脸区域的局部深度信息,确定人脸区域的平均深度值;根据目标图像帧和人脸区域,确定人脸区域的中心点与目标图像帧的中心点之间的偏离比率;根据平均深度值和TOF传感器的预设工作频率或者根据偏离比率、平均深度值和TOF传感器的预设工作频率,对TOF传感器的工作频率进行调节控制;如此,实现了根据TOF传感器与人脸区域之间的距离或者根据TOF传感器与人脸区域之间的距离和人脸区域的中心点的偏离目标图像帧的中心点的程度(即偏离比率)对TOF传感器的工作频率进行动态调节控制,TOF传感器与人脸区域之间的距离越远或若TOF传感器与人脸区域之间的距离越远且偏离比率越大,实时将TOF传感器的工作频率提高,从而提高支付的安全性;TOF传感器与人脸区域之间的距离越近或若TOF传感器与人脸区域之间的距离越近且偏离比率越小,实时将TOF传感器的工作频率降低,从而节省电量,降低功耗;显著地提升了用户体验。
可选地,人脸区域的各个局部可以为人脸区域中的各个像素点,局部深度信息可以包括与各个像素点对应的深度信息。或者,人脸区域的各个局部也可以包括人脸中的嘴部、眼部、鼻子、眉毛等,从而,局部深度信息可以包括与嘴部对应的局部深度信息、与眼部对应的局部深度信息、与鼻子对应的局部深度信息、与眉毛对应的局部深度信息等多个深度信息,此时,例如,与嘴部对应的局部深度信息可以表示该嘴部区域对应的深度信息的平均值,或者,也可以表示该嘴部区域中的各个像素点对应的深度信息。需要说明的是,人脸区域的各个局部可以根据实际情况进行划分,本公开对此不作具体限定。下面以人脸区域的各个局部为人脸区域中的各个像素点,即局部深度信息为与各个像素 点对应的深度信息为例说明本公开的实施例。
可选地,将目标图像帧输入预设的人脸检测模型进行人脸检测之前,即在执行步骤S101之前,该方法还包括:
获取多张待处理图像帧,并通过TOF传感器获取确定各个待处理图像帧(即该获取的多张待处理图像帧)的深度信息。
可选地,将目标图像帧输入预设的人脸检测模型进行人脸检测,确定目标图像帧中的人脸区域,包括:
将多张待处理图像帧中的任一待处理图像帧输入预设的人脸检测模型进行人脸检测,若检测到人脸,则将任一待处理图像作为目标图像帧,并确定目标图像帧中的人脸区域。
需要说明的是,当对待处理图像帧进行人脸检测时并未确定待处理图像帧的人脸区域时,则在确定目标图像帧后,还可以将目标图像帧输入预设的人脸检测模型以再次进行人脸检测,以确定目标图像帧的人脸区域。由此,将目标图像帧输入预设的人脸检测模型进行人脸检测,确定目标图像帧中的人脸区域,可以包括:对多张待处理图像帧中的任一待处理图像帧进行人脸检测(例如,该人脸检测可以由预设的人脸检测模型执行),若检测到人脸,则将任一待处理图像作为目标图像帧;将目标图像帧输入预设的人脸检测模型进行人脸检测,以确定目标图像帧中的人脸区域。
例如,目标图像帧中至少包括一张人脸。
例如,TOF传感器的工作频率的控制方法可以应用于电子系统中,该电子系统可以包括多个应用(app)、图像采集装置和TOF传感器等,多个应用可以包括微信应用、支付宝应用等。图像采集装置用于获取多张待处理图像帧,例如,图像采集装置可以由多个应用中的预设应用控制而开启,从而获取多张待处理图像帧。
例如,图像采集装置可以包括摄像头等。
例如,若在任一待处理图像帧中没有检测到人脸,则从多张待处理图像帧中选择下一张待处理图像帧,并对该下一张待处理图像帧重复进行上述人脸检测的过程。需要说明的是,若多张待处理图像帧均没有检测到人脸,则表明多张待处理图像帧中没有目标图像帧,此时,可以判断是否结束预设应用,或者,通过预设应用控制图像采集装置再次获取待处理图像帧。
例如,多张待处理图像帧可以为不同场景的多张图像帧,也可以为同一场 景的多张图像帧。例如,多张待处理图像帧可以为在不同距离处对同一场景进行拍摄而获取的图像帧。
可选地,在步骤S102中,根据人脸区域和由TOF传感器获取的目标图像帧的深度信息,确定与人脸区域的各个局部对应的局部深度信息,包括:根据人脸区域的特征参数和目标图像帧的深度信息,确定与人脸区域的各个局部对应的局部深度信息。
例如,在图1C所示的步骤S105中,根据目标图像帧和人脸区域,确定人脸区域的中心点与目标图像帧的中心点之间的偏离比率,包括:根据人脸区域的特征参数和目标图像帧的特征参数,确定偏离比率。
例如,在一些示例中,人脸区域的特征参数包括人脸区域的与第一方向对应的第一起始值、人脸区域的与第二方向对应的第二起始值、人脸区域的与第一方向对应的第一宽度参数、人脸区域的与第二方向对应的第一高度参数。也就是说,步骤S102包括根据人脸区域的与第一方向对应的第一起始值、人脸区域的与第二方向对应的第二起始值、人脸区域的与第一方向对应的第一宽度参数、人脸区域的与第二方向对应的第一高度参数和目标图像帧的深度信息,确定与人脸区域的各个局部对应的局部深度信息。
例如,目标图像帧的特征参数包括目标图像帧的与第一方向对应的第三起始值、目标图像帧的与第二方向对应的第四起始值、目标图像帧的与第一方向对应的第二宽度参数、目标图像帧的与第二方向对应的第二高度参数。也就是说,图1C所示的步骤S105可以包括根据第一起始值、第二起始值、第一宽度参数、第一高度参数、第三起始值、第四起始值、第二宽度参数和第二高度参数,确定偏离比率。
例如,在另一些示例中,人脸区域的特征参数包括人脸区域的与第一方向对应的第一起始值和第一终点值、人脸区域的与第二方向对应的第二起始值和第二终点值。也就是说,步骤S102包括根据人脸区域的与第一方向对应的第一起始值和第一终点值、人脸区域的与第二方向对应的第二起始值和第二终点值和目标图像帧的深度信息,确定与人脸区域的各个局部对应的局部深度信息。
例如,目标图像帧的特征参数包括目标图像帧的与第一方向对应的第三起始值和第三终点值、目标图像帧的与第二方向对应的第四起始值和第四终点值。也就是说,图1C所示的步骤S105可以包括根据第一起始值、第一终点值、第二起始值、第二终点值、第三起始值、第三终点值、第四起始值和第四终点值, 确定偏离比率。
需要说明的是,步骤S105也可以包括根据第一起始值、第二起始值、第一宽度参数、第一高度参数、第三起始值、第三终点值、第四起始值和第四终点值,确定偏离比率;或者,步骤S105也可以包括根据第一起始值、第一终点值、第二起始值、第二终点值、第三起始值、第四起始值、第二宽度参数和第二高度参数,确定偏离比率。
例如,偏离比率表示人脸区域的中心点偏离目标图像帧的中心点的程度。
例如,人脸区域位于人脸区域平面坐标系中,人脸区域的第一方向平行于人脸区域平面坐标系的横轴方向,即人脸区域的第一方向包括人脸区域平面坐标系的横轴方向,人脸区域的第二方向平行于人脸区域平面坐标系的纵轴方向,即人脸区域的第二方向包括人脸区域平面坐标系的纵轴方向。
例如,目标图像帧可以表示为Targ(x10,y10,width10,height10),即第三起始值为x10,第三终点值为x11,第二宽度参数为width10,第二高度参数为height10。当目标图像帧的起始点为人脸区域平面坐标系的原点时,则第三起始值x10和第四起始值y10均为0,此时,目标图像帧的分辨率表示为width10*height10。
例如,目标图像帧也可以表示为Targ(x10,y10,x11,y11),即第三起始值为x10,第三终点值为x11,第四起始值为y10,第四终点值为y11。可选地,根据第三起始值x10和第三终点值x11可以确定目标图像帧的第二宽度参数width10,其中,width10=|x11-x10|,也就是说,第二宽度参数width10可以为第三起始值x10和第三终点值x11之间的差值的绝对值。根据第四起始值y10和第四终点值y11可以确定目标图像帧的第二高度参数为height10,其中,height10=|y11-y10|,也就是说,第二高度参数height10可以为第四起始值y10和第四终点值y11之间的差值的绝对值。
例如,x10、y10、width10、height10、x11、y11可以均为正数。例如,在一示例中,x11大于x10,y11大于y10。
可选地,人脸区域可以为矩形区域。当人脸区域表示为Rect(x0,y0,width0,height0),则依据人脸区域Rect(x0,y0,width0,height0)和目标图像帧的深度信息,人脸区域的与第一方向对应的第一起始值为x0,人脸区域的与第二方向对应的第二起始值为y0,人脸区域的与第一方向对应的第一宽度参数为width0,人脸区域的与第二方向对应的第一高度参数为height0,目标图像帧的深度信息为Depth(x,y),从当前的目标图像帧的深度信息Depth(x,y)中取出 对应人脸区域的局部深度信息Depth(xi,yi),xi范围为:(x0,x0+width0),yi范围为:(y0,y0+height0)。
可选地,当人脸区域表示为Rect(x0,y0,x1,y1),则依据人脸区域Rect(x0,y0,x1,y1)和目标图像帧的深度信息,人脸区域的与第一方向对应的第一起始值为x0,人脸区域的与第一方向对应的第一终点值为x1,人脸区域的与第二方向对应的第二起始值为y0,人脸区域的与第二方向对应的第二终点值为y1,目标图像帧的深度信息为Depth(x,y),从当前的目标图像帧的深度信息Depth(x,y)中取出对应人脸区域的局部深度信息Depth(xi,yi),xi范围为:(x0,x1),yi范围为:(y0,y1)。
例如,x0、y0、width0、height0、x1、y1可以均为正数。例如,在一示例中,x1大于x0,y1大于y0。
可选地,根据第一起始值x0和第一终点值x1可以确定人脸区域的第一宽度参数width0,其中,width0=|x1-x0|,也就是说,第一宽度参数width0可以为第一起始值x0和第一终点值x1之间的差值的绝对值。根据第二起始值y0和第二终点值y1可以确定人脸区域的第一高度参数为height0,其中,height0=|y1-y0|,也就是说,第一高度参数height0可以为第二起始值y0和第二终点值y1之间的差值的绝对值。
需要说明的是,目标图像帧的深度信息Depth(x,y)也基于位于人脸区域平面坐标系确定。人脸区域还可以为圆形区域等。
可选地,在人脸区域的特征参数包括第一起始值、第二起始值、第一宽度参数和第一高度参数的情况下,在步骤S103中,根据人脸区域和人脸区域的局部深度信息,确定人脸区域的平均深度值,包括:
将与人脸区域的各个局部对应的局部深度信息求和,确定第一参数;
根据人脸区域的第一宽度参数与人脸区域的第一高度参数的乘积,确定第二参数;
将第一参数与第二参数相除,确定平均深度值。
可选地,计算人脸区域的局部深度信息Depth(xi,yi)对应的各点的总深度信息sum,根据总深度信息sum、人脸区域的第一宽度参数width0、人脸区域的第一高度参数height0计算平均深度值avg0,例如,平均深度值avg0=sum/(width0×height0)。
可选地,在人脸区域的特征参数包括第一起始值、第一终点值、第二起始 值和第二终点值的情况下,在步骤S103中,根据人脸区域和局部深度信息,确定人脸区域的平均深度值,包括:将局部深度信息求和,确定第一参数;根据第一终点值和第一起始值之间的差值的绝对值和第二终点值和第二起始值之间的差值的绝对值的乘积,确定第二参数;将第一参数与所述第二参数相除,确定平均深度值。
可选地,计算人脸区域的局部深度信息Depth(xi,yi)对应的各点的总深度信息sum,根据总深度信息sum、人脸区域的第一终点值x0、第一起始值x1、第二终点值y0和第二起始值y1,计算平均深度值avg0,例如,平均深度值avg0=sum/(|x1-x0|×|y1-y0|)。
例如,根据第一终点值和第一起始值之间的差值和第二终点值和第二起始值之间的差值的乘积,确定第二参数可以包括:根据第一终点值和第一起始值之间的差值的绝对值确定第一宽度参数,根据第二终点值和第二起始值之间的差值的绝对值确定第一高度参数;将第一宽度参数与第一高度参数相乘以确定第二参数。
可选地,在步骤S104中,在人脸区域的特征参数包括第一起始值、第二起始值、第一宽度参数和第一高度参数的情况下,根据平均深度值和TOF传感器的预设工作频率,对TOF传感器的工作频率进行调节控制,包括:
根据平均深度值和TOF传感器的预设工作频率的乘积,确定第三参数;
将人脸区域的第一宽度参数与人脸区域的第一高度参数求和,确定第四参数;
将第三参数与第四参数相除,确定TOF传感器的更新后的工作频率。
可选地,在步骤S104中,在人脸区域的特征参数包括第一起始值、第一终点值、第二起始值和第二终点值的情况下,根据平均深度值和所述TOF传感器的预设工作频率,对TOF传感器的工作频率进行调节控制,包括:
根据平均深度值和预设工作频率的乘积,确定第三参数;
将第一终点值和第一起始值之间的差值的绝对值和第二终点值和第二起始值之间的差值的绝对值求和,确定第四参数;
将第三参数与第四参数相除,确定TOF传感器的更新后的工作频率。
例如,TOF传感器的更新后的工作频率小于TOF传感器的工作频率的上限阈值,例如,TOF传感器的更新后的工作频率大于TOF传感器的工作频率的下限阈值。
可选地,TOF传感器的更新后的工作频率f1=f0×avg0/(width0+height0),或者,TOF传感器的更新后的工作频率f1=f0×avg0/(|x1-x0|+|y1-y0|),TOF传感器的预设工作频率为f0,人脸区域的第一宽度参数为width0,人脸区域的第一高度参数为height0,平均深度值为avg0。根据TOF传感器与人脸区域之间的距离越远,即平均深度值为avg0越大,则采集频率(即TOF传感器的更新后的工作频率)提高,提高支付的安全性;根据TOF传感器与人脸区域之间的距离越近,即平均深度值为avg0越小,则采集频率降低,节省电量,降低功耗。
例如,在步骤S105中,在人脸区域的特征参数包括第一起始值、第二起始值、第一宽度参数和第一高度参数,且目标图像帧的特征参数包括第三起始值、第四起始值、第二宽度参数、第二高度参数的情况下,根据人脸区域的特征参数和目标图像帧的特征参数,确定偏离比率,包括:
根据第一宽度参数和第一起始值,例如,将第一宽度参数的一半和第一起始值求和,确定人脸区域的中心点的第一中心点参数;
根据第一高度参数和第二起始值,例如,将第一高度参数的一半和第二起始值求和,确定人脸区域的中心点的第二中心点参数;
根据第二宽度参数和第三起始值,例如,将第二宽度参数的一半和第三起始值求和,确定目标图像帧的中心点的第三中心点参数;
根据第二高度参数和第四起始值,例如,将第二高度参数的一半和第四起始值求和,确定目标图像帧的中心点的第四中心点参数;
根据第一中心点参数、第三中心点参数、第二中心点参数、第四中心点参数,例如,将第一中心点参数和第三中心点参数之差的平方和第二中心点参数和第四中心点参数之差的平方的和开平方,以确定第五参数,其中,第五参数表示人脸区域的中心点和目标图像帧的中心点之间的距离;
根据第二宽度参数和第二高度参数,例如,将第二宽度参数的一半的平方和第二高度参数的一半的平方的和开平方,以确定第六参数,其中,第六参数表示目标图像帧的对角线长度的一半;
将第五参数与第六参数相除,确定偏离比率。
例如,第一中心点参数cx1表示为cx1=x0+width0/2,第二中心点参数cy1表示为cy1=y0+height0/2,第三中心点参数cx2表示为cx2=x10+(width10)/2,第四中心点参数cy2表示为cy2=y10+(height10)/2。第五参数dis表示为dis=sqrt((cx2-cx1)×(cx2-cx1)+(cy2-cy1)×(cy2-cy1)),第六参数dis_pre表示为 dis_pre=sqrt((width10)/2×(width10)/2+(height10)/2×(height10)/2)。偏离比率dratio=dis/dis_pre。
例如,在步骤S105中,在人脸区域的特征参数包括第一起始值、第一终点值、第二起始值和第二终点值,且目标图像帧的特征参数包括第三起始值、第三终点值、第四起始值、第四终点值的情况下,根据目标图像帧和人脸区域,确定人脸区域的中心点与目标图像帧的中心点之间的偏离比率,包括:
根据第一终点值和第一起始值,例如,将第一终点值和第一起始值之间的差值的一半和第一起始值求和,确定人脸区域的中心点的第一中心点参数;
根据第二终点值和第二起始值,例如,将第二终点值和第二起始值之间的差值的一半和第二起始值求和,确定人脸区域的中心点的第二中心点参数;
根据第三终点值和第三起始值,例如,将第三终点值和第三起始值之间的差值的一半和第三起始值求和,确定目标图像帧的中心点的第三中心点参数;
根据第四终点值和第四起始值,例如,将第四终点值和第四起始值之间的差值的一半和第四起始值求和,确定目标图像帧的中心点的第四中心点参数;
根据第一中心点参数、第三中心点参数、第二中心点参数、第四中心点参数,例如,将第一中心点参数和第三中心点参数之差的平方和第二中心点参数和第四中心点参数之差的平方的和开平方,以确定第五参数,其中,第五参数表示人脸区域的中心点和目标图像帧的中心点之间的距离;
根据第三终点值、第三起始值、第四终点值和第四起始值,例如,将第三终点值和第三起始值之间的差值的绝对值的一半的平方与第四终点值和第四起始值之间的差值的绝对值的一半的平方的和开平方,以确定第六参数,其中,第六参数表示目标图像帧的对角线长度的一半;
将第五参数与第六参数相除,确定偏离比率。
例如,第一中心点参数cx1表示为cx1=x0+(|x0-x1|)/2,第二中心点参数cy1表示为cy1=y0+(|y1-y0|)/2,第三中心点参数cx2表示为cx2=x10+(|x11-x10|)/2,第四中心点参数cy2表示为cy2=y10+(|y11-y10|)/2。第五参数dis表示为dis=sqrt((cx2-cx1)×(cx2-cx1)+(cy2-cy1)×(cy2-cy1)),第六参数dis_pre表示为dis_pre=sqrt((|x11-x10|)/2×(|x11-x10|)/2+(|y11-y10|)/2×(|y11-y10|)/2)。偏离比率dratio=dis/dis_pre。
可选地,在步骤S106中,在人脸区域的特征参数包括第一起始值、第二起始值、第一宽度参数和第一高度参数,且目标图像帧的特征参数包括第三起 始值、第四起始值、第二宽度参数、第二高度参数的情况下,根据偏离比率、平均深度值和TOF传感器的预设工作频率,对TOF传感器的工作频率进行调节控制,包括:
根据平均深度值和预设工作频率的乘积,确定第三参数;
将人脸区域的第一宽度参数与人脸区域的第一高度参数求和,确定第四参数;
根据偏离比率和预设工作频率的乘积,确定第七参数;
将第三参数与第四参数相除,确定第八参数;
将第七参数和第八参数求和,确定TOF传感器的更新后的工作频率。
可选地,在步骤S106中,在人脸区域的特征参数包括第一起始值、第一终点值、第二起始值和第二终点值,且目标图像帧的特征参数包括第三起始值、第三终点值、第四起始值、第四终点值的情况下,根据平均深度值、偏离比率和TOF传感器的预设工作频率,对TOF传感器的工作频率进行调节控制,包括:
根据平均深度值和预设工作频率的乘积,确定第三参数;
将第一终点值和第一起始值之间的差值的绝对值和第二终点值和第二起始值之间的差值的绝对值求和,确定第四参数;
根据偏离比率和预设工作频率的乘积,确定第七参数;
将第三参数与第四参数相除,确定第八参数;
将第七参数和第八参数求和,确定TOF传感器的更新后的工作频率。
例如,更新后的工作频率小于所述TOF传感器的工作频率的上限阈值,例如,TOF传感器的更新后的工作频率大于TOF传感器的工作频率的下限阈值。
可选地,TOF传感器的预设工作频率为f0,人脸区域的第一起始值为x0,人脸区域的第一终点值为x1,人脸区域的第二起始值为y0,人脸区域的第二终点值为y1,人脸区域的第一宽度参数为width0,人脸区域的第一高度参数为height0,平均深度值为avg0,偏离比率为dratio。在一些示例中,第三参数表示为f0×avg0,第四参数表示为width0+height0,第七参数表示为f0×dratio,第八参数表示为f0×avg0/(width0+height0),从而TOF传感器的更新后的工作频率f1=f0×avg0/(width0+height0)+f0×dratio;在另一些示例中,第三参数表示为f0×avg0,第四参数表示为|x1-x0|+|y1-y0|,第七参数表示为f0×dratio,第八参数表示为f0×avg0/(|x1-x0|+|y1-y0|),从而TOF传感器的更新后的工作频率 f1=f0×avg0/(|x1-x0|+|y1-y0|)+f0×dratio。根据TOF传感器与人脸区域之间的距离越远,即平均深度值为avg0越大,且偏离比率为dratio越大,则采集频率(即TOF传感器的更新后的工作频率)提高,提高支付的安全性;根据TOF传感器与人脸区域之间的距离越近,即平均深度值为avg0越小,且偏离比率为dratio越小,则采集频率降低,节省电量,降低功耗。
例如,在确定TOF传感器的更新后的工作频率后,该方法还可以包括将该更新后的工作频率存储到电子系统中,从而实施控制TOF传感器的采集频率,提高支付过程的安全性。
可选地,在根据人脸区域和已获取的目标图像帧的深度信息,确定与人脸区域的特征信息之前,即在执行步骤S20之前,该方法还包括:
根据预设应用的身份标识,判断预设应用是否是支付类型的应用;
若预设应用是支付类型的应用,则执行根据人脸区域和已获取的目标图像帧的深度信息,确定与人脸区域的特征信息的步骤;
若预设应用不是支付类型的应用,则不对TOF传感器的工作频率进行调节控制。
也就是说,根据人脸区域和已获取的目标图像帧的深度信息,确定与人脸区域的特征信息,包括:
若预设应用是支付类型的应用,根据人脸区域和已获取的目标图像帧的深度信息,确定与人脸区域的特征信息。然后,执行根据特征信息和TOF传感器的预设工作频率,对TOF传感器的工作频率进行调节控制的操作。也就是说,在本申请中,若预设应用是支付类型的应用时,才对TOF传感器的工作频率进行调节。
在预设应用不是支付类型的应用的情况下,则不改变TOF传感器的工作频率,即TOF传感器根据预设工作频率进行工作。
例如,预设应用被配置为控制图像采集装置获取目标图像帧。
本公开的一些实施例中提供了另一种TOF传感器的工作频率的控制方法,该方法的流程示意图如图2所示,需要说明的是,如图2所示的示例以特征信息包括平均深度值为例,如图2所示,该方法包括:
S201,开启支付场景的TOF传感器的工作频率控制的功能。
S202,获取启动摄像头传感器(camera sensor,即图像获取装置)的预设应用的ID。
可选地,预设应用的ID采用字符串表示,预设应用的ID为身份标识,用于区别各类应用,各类应用例如包括相机应用、微信应用、支付宝应用、某某银行应用等。
S203,加载TOF传感器的工作频率对应的预设参数表。
可选地,预设参数表中各参数例如,可以包括TOF传感器的预设采集频率f0、工作频率的上限阈值、工作频率的下限阈值、TOF传感器的频率优化系数等,TOF传感器的预设采集频率f0也就是上面描述的TOF传感器的预设工作频率f0。需要说明的是,预设参数表中的各个参数也可以由用户手动调节。
S204,图像采集装置开启,以获取预览视频流。
可选地,图像采集装置为摄像头,例如手机摄像头。
S205,根据预览视频流,获取预览数据帧;开启TOF传感器,获取深度数据帧。
可选地,预览数据帧为上面描述的即待处理图像帧,深度数据帧为与待处理图像帧对应的深度图像(即上面描述的深度信息),图3为TOF传感器获取的深度图像。
S206,将预览数据帧输入人脸检测模型中,人脸检测模型对预览数据帧进行人脸检测,判断预览数据帧中是否存在人脸,若存在人脸,则执行S207的操作,若不存在人脸,则执行S213的操作。
可选地,人脸检测模型可以对人脸中的关键点进行检测,人脸关键点检测包括以下操作:a):采集相当数量(例如:10万张)的人脸图像(底库);b):对步骤a)的人脸图像进行人脸关键点精准标注(包括不限于:脸的轮廓点、眼睛轮廓点、鼻子轮廓点、眉毛轮廓点、额头轮廓点、上嘴唇轮廓点、下嘴唇轮廓点等);c):对步骤b)的精准标注数据按一定比例划分为训练集、验证集、测试集;d):利用步骤c)的训练集对人脸检测模型(神经网络)进行训练,同时用验证集对训练过程中的人脸检测模型检测得到的中间结果进行验证(实时调整人脸检测模型的训练参数),当训练精度和验证精度都达到一定阈值时,停止训练过程,得到训练好的人脸检测模型;e):用测试集对步骤d)获得的训练好的人脸检测模型进行测试,衡量该训练好的人脸检测模型的性能和能力。
S207,获取预览数据帧中的人脸区域。
可选地,预览数据帧中的人脸区域(即上面描述的人脸区域)表示为Rect(x0,y0,width0,height0),人脸区域位于人脸区域平面坐标系中,在人脸区域 平面坐标系的横轴方向上,人脸区域的起始值为x0,在人脸区域平面坐标系的纵轴方向上,人脸区域的起始值为y0,人脸区域的第一宽度参数为width0,也就是说,在人脸区域平面坐标系的横轴方向上,人脸区域的宽度为width0,人脸区域的第一高度参数为height0,也就是说,在人脸区域平面坐标系的纵轴方向上,人脸区域的高度为height0。
S208,判断S202中的预设应用是否是支付类应用,若预设应用是支付类应用,则执行S209的操作,若预设应用不是支付类应用,则执行S213的操作。
S209,依据人脸区域,从当前的预览数据帧对应的深度信息中获取对应人脸区域的局部深度信息。
可选地,依据人脸区域Rect(x0,y0,width0,height0),从当前的预览数据帧的深度信息Depth(x,y)中获取对应人脸区域的局部深度信息Depth(xi,yi),其中,xi范围为:(x0,x0+width0),yi范围为:(y0,y0+height0)。
S210,确定局部深度信息Depth(xi,yi)的平均深度值avg0。
可选地,处理器可以运行下面的程序以执行确定局部深度信息Depth(xi,yi)的平均深度值avg0的操作:
float sum = 0;  // 人脸区域深度累加和
for (long x = x0; x < x0 + width0; ++x) {
    for (long y = y0; y < y0 + height0; ++y) {
        sum = sum + Depth(x, y);  // 累加局部深度信息Depth(xi,yi)
    }
}
float avg0 = sum / (width0 * height0);  // 平均深度值avg0
S211,依据人脸区域的平均深度值avg0和TOF传感器的预设工作频率f0,实时调节TOF传感器的工作频率,以确定TOF传感器的更新后的工作频率。
可选地,TOF传感器的更新后的工作频率f1可以表示为f1=f0×avg0/(width0+height0)。
根据TOF传感器与人脸区域之间的距离越远,即平均深度值为avg0越大,则更新后的工作频率提高,提高支付的安全性;根据TOF传感器与人脸区域之间的距离越近,即平均深度值为avg0越小,则更新后的工作频率降低,节省电量,降低功耗。
S212,将TOF传感器的更新后的工作频率(即实时调节后的TOF传感器的工作频率)更新到电子系统。
S213,判断预设应用是否结束,若预设应用结束,则执行S214的操作, 若预设应用没有结束,则执行S204的操作。
S214,关闭支付场景的TOF传感器的工作频率控制的功能。
可选地,如图4所示,在人脸区域平面坐标系统中,横坐标为平均深度avg0,纵坐标为TOF传感器的工作频率f;当预设应用为非支付类应用时,例如,相机应用、直播类应用等,TOF传感器的工作频率不会随着平均深度avg0进行调节。
可选地,如图5所示,横坐标为平均深度avg0,纵坐标为TOF传感器的工作频率f;当预设应用为支付类应用时,例如,微信应用、支付宝应用等,A点为启动支付类应用的时刻点,TOF传感器的工作频率随着平均深度avg0进行调节,平均深度avg0越大,TOF传感器的工作频率越高,保证支付过程的安全性。当横坐标的平均深度值avg0上升到一定值时,TOF传感器的工作频率(采集频率)达到工作频率的上限阈值时,此时,即使平均深度avg0继续增大,TOF传感器的工作频率也不会再变化。
应用本公开实施例,至少具有如下有益效果:
本公开的实施例提供的TOF传感器的工作频率的控制方法实现了对TOF传感器的工作频率进行动态调节控制,若TOF传感器与人脸区域之间距离越远,则实时将TOF传感器的工作频率提高,从而提高支付的安全性;若TOF传感器与人脸区域之间距离越近,则实时将TOF传感器的工作频率降低,从而节省电量,降低功耗;显著地提升了用户体验。
基于相同的发明构思,本公开实施例还提供了一种TOF传感器的工作频率的控制装置,该装置的结构示意图如图6所示,TOF传感器的工作频率的控制装置60包括第一处理模块601、第二处理模块602和第三处理模块603。
第一处理模块601,用于将目标图像帧输入预设的人脸检测模型进行人脸检测,确定目标图像帧中的人脸区域;
第二处理模块602,用于根据人脸区域和由TOF传感器获取的目标图像帧的深度信息,确定人脸区域的特征信息;
第三处理模块603,用于根据特征信息和TOF传感器的预设工作频率,对TOF传感器的工作频率进行调节控制。
可选地,第一处理模块601还用于获取多张待处理图像帧,TOF传感器用于采集确定各个待处理图像帧的深度信息。
可选地,第一处理模块601具体用于将多张待处理图像帧中的任一待处理 图像帧输入预设的人脸检测模型进行人脸检测,若检测到人脸,则将任一待处理图像作为目标图像帧,并确定目标图像帧中的人脸区域。
可选地,在一些示例中,特征信息包括人脸区域的平均深度值,第二处理模块602具体用于:根据人脸区域和由TOF传感器获取的所述目标图像帧的深度信息,确定与人脸区域的各个局部对应的局部深度信息;根据人脸区域和局部深度信息,确定平均深度值。第三处理模块603用于:根据平均深度值和TOF传感器的预设工作频率,对TOF传感器的工作频率进行调节控制。
可选地,第二处理模块602具体用于根据人脸区域的特征参数和目标图像帧的深度信息,确定与人脸区域的各个局部对应的局部深度信息。
例如,在一些示例中,人脸区域的特征参数包括人脸区域的与第一方向对应的第一起始值、人脸区域的与第二方向对应的第二起始值、人脸区域的与第一方向对应的第一宽度参数、人脸区域的与第二方向对应的第一高度参数,此时,第二处理模块602具体用于根据人脸区域的与第一方向对应的第一起始值、人脸区域的与第二方向对应的第二起始值、人脸区域的与第一方向对应的第一宽度参数、人脸区域的与第二方向对应的第一高度参数和目标图像帧的深度信息,确定与人脸区域的各个局部对应的局部深度信息。
例如,在另一些示例中,人脸区域的特征参数包括人脸区域的与第一方向对应的第一起始值和第一终点值、人脸区域的与第二方向对应的第二起始值和第二终点值,此时,第二处理模块602具体用于根据人脸区域的与第一方向对应的第一起始值和第一终点值、人脸区域的与第二方向对应的第二起始值和第二终点值和目标图像帧的深度信息,确定与人脸区域的各个局部对应的局部深度信息。
例如,人脸区域位于人脸区域平面坐标系中,人脸区域的第一方向平行于人脸区域平面坐标系的横轴方向,即,人脸区域的第一方向包括人脸区域平面坐标系的横轴方向,人脸区域的第二方向平行于人脸区域平面坐标系的纵轴方向,即人脸区域的第二方向包括人脸区域平面坐标系的纵轴方向。
可选地,在人脸区域的特征参数包括第一起始值、第一终点值、第二起始值和第二终点值的情况下,第二处理模块602还具体用于将与人脸区域的各个局部对应的局部深度信息求和,确定第一参数;根据人脸区域的第一宽度参数与人脸区域的第一高度参数的乘积,确定第二参数;将第一参数与第二参数相除,确定平均深度值。
可选地,在人脸区域的特征参数包括第一起始值、第二起始值、第一宽度参数和第一高度参数的情况下,第二处理模块602还具体用于将与人脸区域的各个局部对应的局部深度信息求和,确定第一参数;根据第一终点值和第一起始值之间的差值的绝对值和第二终点值和第二起始值之间的差值的绝对值的乘积,确定第二参数;将第一参数与所述第二参数相除,确定平均深度值。
可选地,在人脸区域的特征参数包括第一起始值、第二起始值、第一宽度参数和第一高度参数的情况下,第三处理模块603具体用于根据平均深度值和TOF传感器的预设工作频率的乘积,确定第三参数;将人脸区域的第一宽度参数与人脸区域的第一高度参数求和,确定第四参数;将第三参数与第四参数相除,确定TOF传感器的更新后的工作频率。
可选地,在人脸区域的特征参数包括第一起始值、第一终点值、第二起始值和第二终点值的情况下,第三处理模块603具体用于根据平均深度值和预设工作频率的乘积,确定第三参数;将第一终点值和第一起始值之间的差值的绝对值和第二终点值和第二起始值之间的差值的绝对值求和,确定第四参数;将第三参数与第四参数相除,确定TOF传感器的更新后的工作频率。
例如,TOF传感器的更新后工作频率小于TOF传感器的工作频率的上限阈值。
可选地,在另一些实施例中,特征信息包括人脸区域的平均深度值、人脸区域的中心点与目标图像帧的中心点之间的偏离比率,第二处理模块602具体用于:根据人脸区域和由TOF传感器获取的目标图像帧的深度信息,确定与人脸区域的各个局部对应的局部深度信息;根据人脸区域和局部深度信息,确定平均深度值;根据目标图像帧和人脸区域,确定偏离比率。第三处理模块603用于:根据偏离比率、平均深度值和TOF传感器的预设工作频率,对TOF传感器的工作频率进行调节控制。
需要说明的是,第二处理模块602确定平均深度值的过程可以参考上面的相关描述,重复之处不再赘述。
可选地,第二处理模块602还具体用于根据人脸区域的特征参数和目标图像帧的特征参数,确定偏离比率。
可选地,在一些示例中,在人脸区域的特征参数包括第一起始值、第二起始值、第一宽度参数和第一高度参数,且目标图像帧的特征参数包括第三起始值、第四起始值、第二宽度参数、第二高度参数的情况下,此时,第二处理模 块602还具体用于将第一宽度参数的一半和第一起始值求和,确定人脸区域的中心点的第一中心点参数;将第一高度参数的一半和第二起始值求和,确定人脸区域的中心点的第二中心点参数;将第二宽度参数的一半和第三起始值求和,确定目标图像帧的中心点的第三中心点参数;将第二高度参数的一半和第四起始值求和,确定目标图像帧的中心点的第四中心点参数;将第一中心点参数和第三中心点参数之差的平方和第二中心点参数和第四中心点参数之差的平方的和开平方,以确定第五参数,其中,第五参数表示人脸区域的中心点和目标图像帧的中心点之间的距离;将第二宽度参数的一半的平方和第二高度参数的一半的平方的和开平方,以确定第六参数,其中,第六参数表示目标图像帧的对角线长度的一半;将第五参数与第六参数相除,确定偏离比率。
可选地,在另一些示例中,在人脸区域的特征参数包括第一起始值、第一终点值、第二起始值和第二终点值,且目标图像帧的特征参数包括第三起始值、第三终点值、第四起始值、第四终点值的情况下,此时,第二处理模块602还具体用于将第一终点值和第一起始值之间的差值的一半和第一起始值求和,确定人脸区域的中心点的第一中心点参数;将第二终点值和第二起始值之间的差值的一半和第二起始值求和,确定人脸区域的中心点的第二中心点参数;将第三终点值和第三起始值之间的差值的一半和第三起始值求和,确定目标图像帧的中心点的第三中心点参数;将第四终点值和第四起始值之间的差值的一半和第四起始值求和,确定目标图像帧的中心点的第四中心点参数;将第一中心点参数和第三中心点参数之差的平方和第二中心点参数和第四中心点参数之差的平方的和开平方,以确定第五参数;将第三终点值和第三起始值之间的差值的绝对值的一半的平方和第四终点值和第四起始值之间的差值的绝对值的一半的平方的和开平方,以确定第六参数;将第五参数与第六参数相除,确定偏离比率。
可选地,在人脸区域的特征参数包括第一起始值、第二起始值、第一宽度参数和第一高度参数,且目标图像帧的特征参数包括第三起始值、第四起始值、第二宽度参数、第二高度参数的情况下,第三处理模块603具体用于根据平均深度值和预设工作频率的乘积,确定第三参数;将人脸区域的第一宽度参数与人脸区域的第一高度参数求和,确定第四参数;根据偏离比率和预设工作频率的乘积,确定第七参数;将第三参数与第四参数相除,确定第八参数;将第七参数和第八参数求和,确定TOF传感器的更新后的工作频率。
可选地,在人脸区域的特征参数包括第一起始值、第一终点值、第二起始值和第二终点值,且目标图像帧的特征参数包括第三起始值、第三终点值、第四起始值、第四终点值的情况下,第三处理模块603具体用于根据平均深度值和预设工作频率的乘积,确定第三参数;将第一终点值和第一起始值之间的差值的绝对值和第二终点值和第二起始值之间的差值的绝对值求和,确定第四参数;根据偏离比率和预设工作频率的乘积,确定第七参数;将第三参数与第四参数相除,确定第八参数;将第七参数和第八参数求和,确定TOF传感器的更新后的工作频率。
可选地,第二处理模块602还具体用于根据预设应用的身份标识,判断预设应用是否是支付类型的应用;若预设应用是支付类型的应用,根据人脸区域和已获取的目标图像帧的深度信息,确定与人脸区域的特征信息,然后,根据特征信息和TOF传感器的预设工作频率,对TOF传感器的工作频率进行调节控制。需要说明的是,若预设应用不是支付类型的应用,则不对TOF传感器的工作频率进行调节控制。
第一处理模块601用于执行上面描述的TOF传感器的工作频率的控制方法中的步骤S10的操作,第二处理模块602用于执行上面描述的TOF传感器的工作频率的控制方法中的步骤S20的操作,第三处理模块603用于执行上面描述的TOF传感器的工作频率的控制方法中的步骤S30的操作,关于第一处理模块601、第二处理模块602、第三处理模块603执行的具体操作可以参考上述TOF传感器的工作频率的控制方法的实施例,重复之处在此不再赘述。
例如,在本公开的一些实施例中,第一处理模块601、第二处理模块602和/或第三处理模块603可以是专用硬件器件,用来实现如上所述的该第一处理模块601、第二处理模块602和/或第三处理模块603的一些或全部功能。例如,第一处理模块601、第二处理模块602和/或第三处理模块603可以是一个电路板或多个电路板的组合,用于实现如上所述的功能。在本申请实施例中,该一个电路板或多个电路板的组合可以包括:(1)一个或多个处理器;(2)与处理器相连接的一个或多个非暂时的计算机可读的存储器;以及(3)处理器可执行的存储在存储器中的固件。
例如,在本公开的另一些实施例中,第一处理模块601、第二处理模块602和/或第三处理模块603包括存储在存储器中的代码和程序;处理器可以执行该代码和程序以实现如上所述的第一处理模块601、第二处理模块602和/或第 三处理模块603的一些功能或全部功能。
TOF传感器的工作频率的控制装置应用本公开实施例,至少具有如下有益效果:
TOF传感器的工作频率的控制装置实现了对TOF传感器的工作频率进行动态调节控制,若TOF传感器与人脸区域之间距离越远或若TOF传感器与人脸区域之间的距离越远且偏离比率为越大,则实时将TOF传感器的工作频率提高,从而提高支付的安全性;若TOF传感器与人脸区域之间距离越近或若TOF传感器与人脸区域之间的距离越近且偏离比率为越小,则实时将TOF传感器的工作频率降低,从而节省电量,降低功耗;显著地提升了用户体验。
本公开实施例提供的TOF传感器的工作频率的控制装置中未详述的内容,可参照上述实施例提供的TOF传感器的工作频率的控制方法的相关描述,本公开实施例提供的TOF传感器的工作频率的控制装置能够达到的有益效果与上述实施例提供的TOF传感器的工作频率的控制方法相同,在此不再赘述。
基于相同的发明构思,本公开实施例还提供了一种电子设备,该电子设备的结构示意图如图7所示,该电子设备7000包括至少一个处理器7001、存储器7002和总线7003,至少一个处理器7001均通过总线7003与存储器7002电连接;存储器7002被配置用于存储有至少一个计算机可执行指令,处理器7001被配置用于执行该至少一个计算机可执行指令,从而执行如本公开的任意一个实施例或任意一种可选实施方式提供的任意一种TOF传感器的工作频率的控制方法的步骤。
进一步,处理器7001可以是FPGA(Field-Programmable Gate Array,现场可编程门阵列)或者其它具有逻辑处理能力的器件,如MCU(Microcontroller Unit,微控制单元)、CPU(Central Process Unit,中央处理器)。
例如,存储器7002可以包括一个或多个计算机程序产品的任意组合,计算机程序产品可以包括各种形式的计算机可读存储介质,例如易失性存储器和/或非易失性存储器。易失性存储器例如可以包括随机存取存储器(RAM)和/或高速缓冲存储器(cache)等。非易失性存储器例如可以包括只读存储器(ROM)、硬盘、可擦除可编程只读存储器(EPROM)、便携式紧致盘只读存储器(CD-ROM)、USB存储器、闪存等。在计算机可读存储介质上可以存储一个或多个计算机可执行指令,处理器7001可以运行计算机可执行指令,以实现各种功能。在计算机可读存储介质中还可以存储各种应用程序和各种数据、以 及应用程序使用和/或产生的各种数据等。
应用本公开实施例,至少具有如下有益效果:
电子设备实现了对TOF传感器的工作频率进行动态调节控制,若TOF传感器与人脸区域之间距离越远或若TOF传感器与人脸区域之间的距离越远且偏离比率为越大,则实时将TOF传感器的工作频率提高,从而提高支付的安全性;若TOF传感器与人脸区域之间距离越近或若TOF传感器与人脸区域之间的距离越近且偏离比率为越小,则实时将TOF传感器的工作频率降低,从而节省电量,降低功耗;显著地提升了用户体验。
基于相同的发明构思,本公开实施例还提供了一种计算机可读存储介质,该计算机可读存储介质存储有计算机程序,该计算机程序用于被处理器执行时实现本公开的任意一个实施例或任意一种可选实施方式提供的任意一种TOF传感器的工作频率的控制方法的步骤。
本公开实施例提供的计算机可读存储介质包括但不限于任何类型的盘(包括软盘、硬盘、光盘、CD-ROM、和磁光盘)、ROM(Read-Only Memory,只读存储器)、RAM(Random Access Memory,随即存储器)、EPROM(Erasable Programmable Read-Only Memory,可擦写可编程只读存储器)、EEPROM(Electrically Erasable Programmable Read-Only Memory,电可擦可编程只读存储器)、闪存、磁性卡片或光线卡片。也就是,可读存储介质包括由设备(例如,计算机)以能够读的形式存储或传输信息的任何介质。
例如,在一些实施例中,该计算机可读存储介质可以应用于上述任一实施例提供的电子设备中,例如,其可以为电子设备中的存储器。
应用本公开实施例,至少具有如下有益效果:
将目标图像帧输入预设的人脸检测模型进行人脸检测,确定目标图像帧中的人脸区域;根据人脸区域和由TOF传感器获取的目标图像帧的深度信息,确定人脸区域的特征信息;根据特征信息和TOF传感器的预设工作频率,对TOF传感器的工作频率进行调节控制;如此,实现了对TOF传感器的工作频率进行动态调节控制,若TOF传感器与人脸区域之间距离越远或若TOF传感器与人脸区域之间的距离越远且偏离比率为越大,则实时将TOF传感器的工作频率提高,从而提高支付的安全性;若TOF传感器与人脸区域之间距离越近或若TOF传感器与人脸区域之间的距离越近且偏离比率为越小,则实时将TOF传感器的工作频率降低,从而节省电量,降低功耗;显著地提升了用户体验。
本技术领域技术人员可以理解,可以用计算机程序指令来实现这些结构图和/或框图和/或流图中的每个框以及这些结构图和/或框图和/或流图中的框的组合。本技术领域技术人员可以理解,可以将这些计算机程序指令提供给通用计算机、专业计算机或其他可编程数据处理方法的处理器来实现,从而通过计算机或其他可编程数据处理方法的处理器来执行本公开公开的结构图和/或框图和/或流图的框或多个框中指定的方案。
本技术领域技术人员可以理解,本公开中已经讨论过的各种操作、方法、流程中的步骤、措施、方案可以被交替、更改、组合或删除。进一步地,具有本公开中已经讨论过的各种操作、方法、流程中的其他步骤、措施、方案也可以被交替、更改、重排、分解、组合或删除。进一步地,现有技术中的具有与本公开中公开的各种操作、方法、流程中的步骤、措施、方案也可以被交替、更改、重排、分解、组合或删除。
以上所述仅是本公开的部分实施方式,应当指出,对于本技术领域的普通技术人员来说,在不脱离本公开原理的前提下,还可以做出若干改进和润饰,这些改进和润饰也应视为本公开的保护范围。

Claims (19)

  1. 一种飞行时间TOF传感器的工作频率的控制方法,包括:
    将目标图像帧输入预设的人脸检测模型进行人脸检测,确定所述目标图像帧中的人脸区域;
    根据所述人脸区域和由所述TOF传感器获取的所述目标图像帧的深度信息,确定所述人脸区域的特征信息;
    根据所述特征信息和所述TOF传感器的预设工作频率,对所述TOF传感器的工作频率进行调节控制。
  2. 根据权利要求1所述的方法,其中,根据所述人脸区域和由所述TOF传感器获取的所述目标图像帧的深度信息,确定所述人脸区域的特征信息包括:
    根据所述人脸区域和由所述TOF传感器获取的所述目标图像帧的深度信息,确定与所述人脸区域的各个局部对应的局部深度信息;
    根据所述人脸区域和所述局部深度信息,确定所述人脸区域的平均深度值,其中,所述特征信息包括所述平均深度值。
  3. 根据权利要求2所述的方法,其中,根据所述人脸区域和由所述TOF传感器获取的所述目标图像帧的深度信息,确定所述人脸区域的特征信息还包括:
    根据所述目标图像帧和所述人脸区域,确定所述人脸区域的中心点与所述目标图像帧的中心点之间的偏离比率,其中,所述特征信息还包括所述偏离比率。
  4. 根据权利要求1-3任一项所述的方法,其中,所述将目标图像帧输入预设的人脸检测模型进行人脸检测之前,所述方法还包括:获取多张待处理图像帧,并通过所述TOF传感器获取确定所述多张待处理图像帧的深度信息;
    所述将目标图像帧输入预设的人脸检测模型进行人脸检测,确定所述目标图像帧中的人脸区域,包括:
    将所述多张待处理图像帧中的任一待处理图像帧输入所述预设的人脸检测模型进行人脸检测,若检测到人脸,则将所述任一待处理图像作为所述目标图像帧,并确定所述目标图像帧中的人脸区域。
  5. 根据权利要求2所述的方法,其中,所述根据所述人脸区域和由所述TOF传感器获取的所述目标图像帧的深度信息,确定与所述人脸区域的各个局 部对应的局部深度信息,包括:
    根据所述人脸区域的特征参数和所述目标图像帧的深度信息,确定与所述人脸区域的各个局部对应的所述局部深度信息。
  6. 根据权利要求3所述的方法,其中,根据所述目标图像帧和所述人脸区域,确定所述人脸区域的中心点与所述目标图像帧的中心点之间的偏离比率,包括:
    根据所述人脸区域的特征参数和所述目标图像帧的特征参数,确定所述偏离比率。
  7. 根据权利要求6所述的方法,其中,所述人脸区域的特征参数包括所述人脸区域的与第一方向对应的第一起始值、所述人脸区域的与第二方向对应的第二起始值、所述人脸区域的与所述第一方向对应的第一宽度参数、所述人脸区域的与所述第二方向对应的第一高度参数;或者,所述人脸区域的特征参数包括所述人脸区域的与第一方向对应的第一起始值和第一终点值、所述人脸区域的与第二方向对应的第二起始值和第二终点值;
    所述目标图像帧的特征参数包括所述目标图像帧的与所述第一方向对应的第三起始值、所述目标图像帧的与所述第二方向对应的第四起始值、所述目标图像帧的与所述第一方向对应的第二宽度参数、所述目标图像帧的与所述第二方向对应的第二高度参数;或者,所述目标图像帧的特征参数包括所述目标图像帧的与所述第一方向对应的第三起始值和第三终点值、所述目标图像帧的与所述第二方向对应的第四起始值和第四终点值;
    其中,所述人脸区域位于人脸区域平面坐标系中,所述人脸区域的所述第一方向平行于所述人脸区域平面坐标系的横轴方向,所述人脸区域的所述第二方向平行于所述人脸区域平面坐标系的纵轴方向。
  8. 根据权利要求7所述的方法,其中,所述根据所述人脸区域和所述局部深度信息,确定所述人脸区域的平均深度值,包括:
    将所述局部深度信息求和,确定第一参数;
    根据所述第一宽度参数与所述第一高度参数的乘积,确定第二参数,或者,根据所述第一终点值和所述第一起始值之间的差值的绝对值和所述第二终点值和所述第二起始值之间的差值的绝对值的乘积,确定第二参数;
    将所述第一参数与所述第二参数相除,确定所述平均深度值。
  9. 根据权利要求8所述的方法,其中,根据所述特征信息和所述TOF传 感器的预设工作频率,对所述TOF传感器的工作频率进行调节控制,包括:
    根据所述平均深度值和所述TOF传感器的预设工作频率,对所述TOF传感器的工作频率进行调节控制。
  10. 根据权利要求9所述的方法,其中,所述根据所述平均深度值和所述TOF传感器的预设工作频率,对所述TOF传感器的工作频率进行调节控制,包括:
    根据所述平均深度值和所述预设工作频率的乘积,确定第三参数;
    将所述人脸区域的第一宽度参数与所述人脸区域的第一高度参数求和,确定第四参数,或者,将所述第一终点值和所述第一起始值之间的差值的绝对值和所述第二终点值和所述第二起始值之间的差值的绝对值求和,确定第四参数;
    将所述第三参数与所述第四参数相除,确定所述TOF传感器的更新后的工作频率,其中,所述更新后的工作频率小于所述TOF传感器的工作频率的上限阈值。
  11. 根据权利要求7所述的方法,其中,
    根据所述人脸区域的特征参数和所述目标图像帧的特征参数,确定所述偏离比率,包括:
    根据所述第一宽度参数和所述第一起始值,确定所述人脸区域的中心点的第一中心点参数,或者,根据所述第一终点值和所述第一起始值,确定所述人脸区域的中心点的第一中心点参数;
    根据所述第一高度参数和所述第二起始值,确定所述人脸区域的中心点的第二中心点参数,或者,根据所述第二终点值和所述第二起始值,确定所述人脸区域的中心点的第二中心点参数;
    根据所述第二宽度参数和所述第三起始值,确定所述目标图像帧的中心点的第三中心点参数,或者,根据所述第三终点值和所述第三起始值,确定所述目标图像帧的中心点的第三中心点参数;
    根据所述第二高度参数和所述第四起始值,确定所述目标图像帧的中心点的第四中心点参数,或者,根据所述第四终点值和所述第四起始值,确定所述目标图像帧的中心点的第四中心点参数;
    根据所述第一中心点参数、所述第三中心点参数、所述第二中心点参数、所述第四中心点参数,确定第五参数,其中,所述第五参数表示所述人脸区域的中心点和所述目标图像帧的中心点之间的距离;
    根据所述第二宽度参数和所述第二高度参数,确定第六参数,或者,根据所述第三终点值、所述第三起始值、所述第四终点值和所述第四起始值,确定第六参数,其中,所述第六参数表示所述目标图像帧的对角线长度的一半;
    将所述第五参数与所述第六参数相除,确定所述偏离比率。
  12. 根据权利要求11所述的方法,其中,根据所述特征信息和所述TOF传感器的预设工作频率,对所述TOF传感器的工作频率进行调节控制,包括:
    所述根据所述平均深度值、所述偏离比率和所述TOF传感器的预设工作频率,对所述TOF传感器的工作频率进行调节控制。
  13. 根据权利要求12所述的方法,其中,
    所述根据所述平均深度值、所述偏离比率和所述TOF传感器的预设工作频率,对所述TOF传感器的工作频率进行调节控制,包括:
    根据所述平均深度值和所述预设工作频率的乘积,确定第三参数;
    将所述人脸区域的第一宽度参数与所述人脸区域的第一高度参数求和,确定第四参数,或者,将所述第一终点值和所述第一起始值之间的差值的绝对值和所述第二终点值和所述第二起始值之间的差值的绝对值求和,确定第四参数;
    根据所述偏离比率和所述预设工作频率的乘积,确定第七参数;
    将所述第三参数与所述第四参数相除,确定第八参数;
    将所述第七参数和所述第八参数求和,确定所述TOF传感器的更新后的工作频率,其中,所述更新后的工作频率小于所述TOF传感器的工作频率的上限阈值。
  14. 根据权利要求1-13任一项所述的方法,其中,在所述根据所述人脸区域和获取的所述目标图像帧的深度信息,确定与所述人脸区域的特征信息之前,所述方法还包括:
    根据预设应用的身份标识,判断所述预设应用是否是支付类型的应用,其中,所述预设应用被配置为控制图像采集装置获取所述目标图像帧;
    若所述预设应用是支付类型的应用,则执行所述根据所述人脸区域和获取的所述目标图像帧的深度信息,确定与所述人脸区域的特征信息的步骤,
    若所述预设应用不是支付类型的应用,则不对所述TOF传感器的工作频率进行调节控制。
  15. 一种TOF传感器的工作频率的控制装置,包括:
    第一处理模块,用于将目标图像帧输入预设的人脸检测模型进行人脸检测, 确定所述目标图像帧中的人脸区域;
    第二处理模块,用于根据所述人脸区域和由所述TOF传感器获取的所述目标图像帧的深度信息,确定所述人脸区域的特征信息;
    第三处理模块,用于根据所述特征信息和所述TOF传感器的预设工作频率,对所述TOF传感器的工作频率进行调节控制。
  16. 根据权利要求15所述的控制装置,其中,所述特征信息包括所述人脸区域的平均深度值,
    所述第二处理模块用于:根据所述人脸区域和由所述TOF传感器获取的所述目标图像帧的深度信息,确定与所述人脸区域的各个局部对应的局部深度信息;根据所述人脸区域和所述局部深度信息,确定所述平均深度值;
    所述第三处理模块用于:根据所述平均深度值和所述TOF传感器的预设工作频率,对所述TOF传感器的工作频率进行调节控制。
  17. 根据权利要求15所述的控制装置,其中,所述特征信息包括所述人脸区域的平均深度值、所述人脸区域的中心点与所述目标图像帧的中心点之间的偏离比率,
    所述第二处理模块用于:根据所述人脸区域和由所述TOF传感器获取的所述目标图像帧的深度信息,确定与所述人脸区域的各个局部对应的局部深度信息;根据所述人脸区域和所述局部深度信息,确定所述平均深度值;根据所述目标图像帧和所述人脸区域,确定所述偏离比率;
    所述第三处理模块用于:根据所述偏离比率、所述平均深度值和所述TOF传感器的预设工作频率,对所述TOF传感器的工作频率进行调节控制。
  18. 一种电子设备,包括:处理器、存储器;
    所述存储器,用于存储计算机程序;
    所述处理器,用于通过调用并运行所述计算机程序,执行上述权利要求1-14中任一项所述的TOF传感器的工作频率的控制方法。
  19. 一种计算机可读存储介质,其中,所述计算机可读存储介质存储有计算机程序,所述计算机程序用于被处理器执行时实现如权利要求1-14中任一项所述的TOF传感器的工作频率的控制方法。
PCT/CN2019/101624 2019-04-18 2019-08-20 Tof传感器的工作频率的控制方法、装置、设备及介质 WO2020211231A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/424,427 US20220107398A1 (en) 2019-04-18 2019-08-20 Method for controlling working frequency of tof sensor, and apparatus, device, and medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910313354.4 2019-04-18
CN201910313354.4A CN110032979A (zh) 2019-04-18 2019-04-18 Tof传感器的工作频率的控制方法、装置、设备及介质

Publications (1)

Publication Number Publication Date
WO2020211231A1 true WO2020211231A1 (zh) 2020-10-22

Family

ID=67238965

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/101624 WO2020211231A1 (zh) 2019-04-18 2019-08-20 Tof传感器的工作频率的控制方法、装置、设备及介质

Country Status (3)

Country Link
US (1) US20220107398A1 (zh)
CN (1) CN110032979A (zh)
WO (1) WO2020211231A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110032979A (zh) * 2019-04-18 2019-07-19 北京迈格威科技有限公司 Tof传感器的工作频率的控制方法、装置、设备及介质
CN113296106A (zh) * 2021-05-17 2021-08-24 江西欧迈斯微电子有限公司 一种tof测距方法、装置、电子设备以及存储介质
TWI830363B (zh) * 2022-05-19 2024-01-21 鈺立微電子股份有限公司 用於提供三維資訊的感測裝置

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140049767A1 (en) * 2012-08-15 2014-02-20 Microsoft Corporation Methods and systems for geometric phase unwrapping in time of flight systems
US20140152974A1 (en) * 2012-12-04 2014-06-05 Texas Instruments Incorporated Method for time of flight modulation frequency detection and illumination modulation frequency adjustment
CN109031333A (zh) * 2018-08-22 2018-12-18 Oppo广东移动通信有限公司 距离测量方法和装置、存储介质、电子设备
CN110032979A (zh) * 2019-04-18 2019-07-19 北京迈格威科技有限公司 Tof传感器的工作频率的控制方法、装置、设备及介质

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140037135A1 (en) * 2012-07-31 2014-02-06 Omek Interactive, Ltd. Context-driven adjustment of camera parameters
KR20160025143A (ko) * 2014-08-26 2016-03-08 삼성디스플레이 주식회사 표시 장치의 구동 방법 및 이를 수행하기 위한 표시 장치
CN108270970B (zh) * 2018-01-24 2020-08-25 北京图森智途科技有限公司 一种图像采集控制方法及装置、图像采集系统
CN108965721B (zh) * 2018-08-22 2020-12-22 Oppo广东移动通信有限公司 摄像头模组的控制方法和装置、电子设备
CN109327626B (zh) * 2018-12-12 2020-09-11 Oppo广东移动通信有限公司 图像采集方法、装置、电子设备和计算机可读存储介质

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140049767A1 (en) * 2012-08-15 2014-02-20 Microsoft Corporation Methods and systems for geometric phase unwrapping in time of flight systems
US20140152974A1 (en) * 2012-12-04 2014-06-05 Texas Instruments Incorporated Method for time of flight modulation frequency detection and illumination modulation frequency adjustment
CN109031333A (zh) * 2018-08-22 2018-12-18 Oppo广东移动通信有限公司 距离测量方法和装置、存储介质、电子设备
CN110032979A (zh) * 2019-04-18 2019-07-19 北京迈格威科技有限公司 Tof传感器的工作频率的控制方法、装置、设备及介质

Also Published As

Publication number Publication date
US20220107398A1 (en) 2022-04-07
CN110032979A (zh) 2019-07-19

Similar Documents

Publication Publication Date Title
US10990803B2 (en) Key point positioning method, terminal, and computer storage medium
WO2020211231A1 (zh) Tof传感器的工作频率的控制方法、装置、设备及介质
US20210089755A1 (en) Face verification method and apparatus
US20220044040A1 (en) Liveness test method and apparatus
US10339402B2 (en) Method and apparatus for liveness detection
WO2018177237A1 (zh) 图像处理方法、装置和存储介质
KR102415509B1 (ko) 얼굴 인증 방법 및 장치
WO2022078041A1 (zh) 遮挡检测模型的训练方法及人脸图像的美化处理方法
WO2020093634A1 (zh) 基于人脸识别的照片添加方法、装置、终端及存储介质
KR102330322B1 (ko) 영상 특징 추출 방법 및 장치
US11205278B2 (en) Depth image processing method and apparatus, and electronic device
US10592759B2 (en) Object recognition apparatus and control method therefor
JP6096161B2 (ja) 情報処理装置および情報処理方法
WO2021203823A1 (zh) 图像分类方法、装置、存储介质及电子设备
CN108810406B (zh) 人像光效处理方法、装置、终端及计算机可读存储介质
US11380131B2 (en) Method and device for face recognition, storage medium, and electronic device
US20200012844A1 (en) Methods and devices for recognizing fingerprint
CN111492426A (zh) 注视启动的语音控制
WO2020248848A1 (zh) 智能化异常细胞判断方法、装置及计算机可读存储介质
JP2016081249A (ja) 情報処理装置および情報処理方法
KR20190025527A (ko) 전자 장치 및 그 제어 방법
WO2021036442A1 (zh) 循环保边平滑滤波的方法、装置和电子设备
JP2018045435A (ja) 検出装置、検出方法、および検出プログラム
WO2020001016A1 (zh) 运动图像生成方法、装置、电子设备及计算机可读存储介质
WO2022213349A1 (zh) 戴口罩人脸识别方法、装置、计算机存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19924725

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19924725

Country of ref document: EP

Kind code of ref document: A1