CN113569592A - Distance detection method, device and storage medium


Info

Publication number
CN113569592A
CN113569592A
Authority
CN
China
Prior art keywords
target
distance
calibration value
image
portrait
Prior art date
Legal status
Granted
Application number
CN202010349760.9A
Other languages
Chinese (zh)
Other versions
CN113569592B (en)
Inventor
向延钊
翟元义
阳雷
王思宇
张甫帅
周宗旭
肖文渊
邱水兵
叶似锦
王婷
Current Assignee
Foshan Shunde Midea Electrical Heating Appliances Manufacturing Co Ltd
Original Assignee
Foshan Shunde Midea Electrical Heating Appliances Manufacturing Co Ltd
Priority date
Filing date
Publication date
Application filed by Foshan Shunde Midea Electrical Heating Appliances Manufacturing Co Ltd
Priority to CN202010349760.9A (granted as CN113569592B)
Priority to PCT/CN2020/129133 (published as WO2021218117A1)
Publication of CN113569592A
Application granted
Publication of CN113569592B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00 - Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C11/04 - Interpretation of pictures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08B - SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00 - Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/18 - Status alarms
    • G08B21/24 - Reminder alarms, e.g. anti-loss alarms

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Business, Economics & Management (AREA)
  • Emergency Management (AREA)
  • Theoretical Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The embodiment of the application discloses a distance detection method, a device and a storage medium, wherein the method comprises the following steps: performing image acquisition to obtain an image to be detected; carrying out portrait recognition on the image to be detected; when a portrait is identified in the image to be detected, determining the number of pixels occupied by the characteristic line segment of the head of the person in the image to be detected; determining a target calibration value according to the number of the pixels; and querying a specific first corresponding relation according to the target calibration value to obtain a target distance between the person corresponding to the portrait and the image sensor, wherein the first corresponding relation is used for representing a mapping relation between the calibration value and the distance.

Description

Distance detection method, device and storage medium
Technical Field
The embodiments of the present application relate to, but are not limited to, the field of electronic technologies, and in particular to a distance detection method, device, and storage medium.
Background
Existing fans and other household appliances sometimes need to adjust their working mode based on the position of a person in the current space or the distance between the person and the appliance. In the related art, techniques for measuring a person's position or the distance between the person and a household appliance include infrared, Time of Flight (TOF), Radio Frequency (RF), ultrasonic, image recognition, and the like. When the distance between a person and a fan is measured using image recognition, it is possible to recognize whether a person is present and to determine both the distance between the person and the fan and the person's angular position relative to the fan, which makes it convenient for the fan to adjust its blowing strategy according to the person's position. How to accurately detect the distance between the person and the fan through image recognition is therefore a problem to be solved by those skilled in the art.
Disclosure of Invention
In view of this, embodiments of the present application provide a distance detection method, a distance detection device, and a storage medium.
The technical solutions of the embodiments of the present application are implemented as follows:
in one aspect, an embodiment of the present application provides a distance detection method, where the method includes:
performing image acquisition to obtain an image to be detected;
carrying out portrait recognition on the image to be detected;
when a portrait is identified in the image to be detected, determining the number of pixels occupied by the characteristic line segment of the head of the person in the image to be detected;
determining a target calibration value according to the number of the pixels;
and inquiring a specific first corresponding relation according to the target calibration value to obtain a target distance between a person corresponding to the portrait and the image sensor, wherein the first corresponding relation is used for representing a mapping relation between the calibration value and the distance.
On the other hand, an embodiment of the present application provides a distance detection apparatus, including:
the image sensor is used for acquiring images;
the processor is used for acquiring images through the image sensor to obtain an image to be detected; carrying out portrait recognition on the image to be detected; when a portrait is identified in the image to be detected, determining the number of pixels occupied by the characteristic line segment of the head of the person in the image to be detected; determining a target calibration value according to the number of the pixels; and inquiring a specific first corresponding relation according to the target calibration value to obtain a target distance between a person corresponding to the portrait and the image sensor, wherein the first corresponding relation is used for representing a mapping relation between the calibration value and the distance.
In yet another aspect, an embodiment of the present application provides a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the steps in the method.
In the embodiment of the application, the head of the portrait is used as the main detection base point: the number of pixels occupied in the image to be detected by a characteristic line segment of the head, such as its width, its length or its length-width diagonal, is detected, and the distance between the person corresponding to the portrait and the image sensor is determined according to the target calibration value derived from the number of pixels, combined with a specific mapping relation between calibration values and distances. Firstly, when images are collected from different angles, the pixels occupied by characteristic line segments such as the width, the length and the length-width diagonal of the head change little, so that no matter from which angle the person faces the image sensor, the distance between the person and the image sensor can be judged accurately by detecting the pixels occupied by the characteristic line segment of the head in the image to be detected. Secondly, compared with methods in the related art that judge the distance by detecting the pixels occupied by the face area or other complex features, where different users at different distances may present the same face area, the head characteristic line segment differs little between different people, so the distance judgment is more accurate. Thirdly, when performing image detection, calculating the pixels occupied by a line segment is faster than calculating the pixels occupied by an area or other complex features.
Furthermore, by combining table lookup with curve fitting, the distance between the image sensor and the person corresponding to the target calibration value, determined from the pixels currently occupied by the head characteristic line segment in the image to be detected, can be obtained quickly and directly, which improves the efficiency of distance detection and reduces the amount of calculation.
Drawings
Fig. 1 is a schematic diagram illustrating an implementation flow of a distance detection method according to an embodiment of the present application;
fig. 2 is a schematic diagram of an implementation flow of a distance detection method provided in an embodiment of the present application;
fig. 3 is a schematic structural diagram of a distance detection device according to an embodiment of the present disclosure;
fig. 4A is a schematic structural diagram of a fan according to an embodiment of the present disclosure;
FIG. 4B is a schematic view of the position of a portrait within an image;
FIG. 4C is a schematic view of the position of a portrait within an image;
fig. 4D is a schematic diagram of a correspondence relationship between a pixel ratio and a distance obtained by actual measurement.
Detailed Description
In order to make the purpose, technical solutions and advantages of the present application clearer, the technical solutions of the present application are described in further detail below with reference to the drawings and embodiments. The described embodiments should not be construed as limiting the present application, and all other embodiments obtained by a person of ordinary skill in the art without creative effort shall fall within the protection scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
Where terms such as "first", "second" and "third" appear in this application, they are used merely to distinguish similar objects and do not imply a particular ordering of the objects. It should be understood that "first", "second" and "third" may be interchanged in specific circumstances or orders where permissible, so that the embodiments of the application described herein can be practiced in orders other than those illustrated or described herein.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application.
In the related art, distance detection methods based on image recognition mostly rely on human face features, and the implementation steps may be as follows: first, the object to be detected in the image acquired by the image sensor is judged to be a person through face recognition; then, the distance between the person and the image sensor is measured indirectly by detecting the pixel proportion occupied in the image by face features (such as face size, face length and width, face area, eyebrow spacing and other dimensional features).
However, when the distance between the person and the image sensor is measured by the pixel proportion of face features, the face size features detected when the person presents a side face to the image sensor differ greatly from those detected for a frontal face, and when the back of the person's head faces the image sensor, no face can be detected at all. The distance between the person and the image sensor therefore cannot be accurately detected based on the pixel proportion of face size features.
The embodiment of the application provides a distance detection method to solve the problem that the distance between a person and an image sensor cannot be accurately detected in the prior art. Fig. 1 is a schematic flowchart of an implementation process of a distance detection method provided in an embodiment of the present application, where the method may be executed by a processor, and as shown in fig. 1, the method includes:
s101, acquiring an image to be detected to obtain an image to be detected;
here, the image pickup may be performed by an image sensor, which may be any photosensitive element capable of image pickup. In implementation, the image sensor may be a camera, and the processor may control the camera to capture a two-dimensional planar image of the current space as an image to be detected.
Step S102, carrying out portrait recognition on the image to be detected;
here, the image recognition of the image to be detected is performed to determine whether a person is present in the current space. In implementation, an image recognition technology can be adopted to perform portrait-related feature recognition on an image to be detected.
Step S103, when a portrait is identified in the image to be detected, determining the number of pixels occupied by the characteristic line segment of the head of a person in the image to be detected;
Here, the human head corresponds to the region enclosed by the head outline in the portrait; in implementation, this may include the areas where the face and the hair of the portrait are located. Depending on the angle at which the person faces the image sensor, the head may present, but is not limited to, the front of the head (the side where the face is located), the side of the head (either of the two sides where the ears are located), or the back of the head (the side where the back of the skull is located).
The characteristic line segment of the human head may be a line segment characterizing the head in a specific dimension, and may include, but is not limited to, a face length line segment, a face width line segment, a face length-width diagonal, a head length line segment, a head width line segment, a head length-width diagonal, and the like. In implementation, a person skilled in the art may select an appropriate characteristic line segment according to the actual situation, which is not limited in the embodiment of the present application.
The number of pixels occupied by the characteristic line segment in the image to be detected may be the pixel distance between the two end points of the head in the dimension corresponding to the characteristic line segment, or the largest count of head pixels among the rows or columns of pixels that the head occupies in that dimension. For example, when the characteristic line segment is a head length line segment, the number of pixels it occupies in the image to be detected may be the vertical pixel distance between the uppermost and lowermost pixels among all pixels occupied by the head, or the pixel count of the tallest column among the columns of pixels occupied by the head. For another example, when the characteristic line segment is a head width line segment, the number of pixels may be the horizontal pixel distance between the leftmost and rightmost pixels among all pixels occupied by the head, or the pixel count of the widest row among the rows of pixels occupied by the head. A person skilled in the art may select an appropriate manner of determining the number of pixels occupied by the characteristic line segment according to the actual situation, which is not limited in the embodiment of the present application.
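As a concrete illustration of the two counting strategies just described, the following is a minimal Python sketch. It assumes portrait recognition has already produced a binary head mask; the function name and the mask representation are assumptions for illustration, not part of the embodiments.

```python
import numpy as np

def segment_pixel_count(head_mask: np.ndarray, dimension: str,
                        mode: str = "extent") -> int:
    """Pixels occupied by a head characteristic line segment.

    head_mask : (H, W) boolean array, True where the head was detected.
    dimension : "length" (vertical) or "width" (horizontal).
    mode      : "extent"  - pixel distance between the two outermost head
                            pixels in that dimension;
                "max_run" - largest per-column (length) or per-row (width)
                            count of head pixels.
    """
    ys, xs = np.nonzero(head_mask)
    if ys.size == 0:
        raise ValueError("no head pixels in mask")
    if mode == "extent":
        coords = ys if dimension == "length" else xs
        return int(coords.max() - coords.min() + 1)
    # "max_run": tallest column for length, widest row for width
    axis = 0 if dimension == "length" else 1
    return int(head_mask.sum(axis=axis).max())
```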
In some embodiments, when the head of the person is recognized as the front, the characteristic line segment is a face width line segment, a face length line segment or a face length-width diagonal; when the head is recognized as the back, the characteristic line segment is a head width line segment, a head length line segment or a head length-width diagonal; and when the head is recognized as the side, the characteristic line segment is a face width line segment.
Step S104, determining a target calibration value according to the number of the pixels;
here, the calibration value is a value determined by a specific policy based on the number of pixels occupied by the feature line segment in the image to be detected, and is used for calibrating the distance between the person and the image sensor, and there is a one-to-one correspondence relationship between the calibration value and the distance. In implementation, the number of pixels may be directly determined as a target calibration value, or a ratio of the number of pixels to a total number of pixels of a specific dimension in an image to be detected may be determined as a target calibration value. A person skilled in the art may select an appropriate strategy to determine the target calibration value according to actual conditions during implementation, which is not limited in the embodiment of the present application.
Step S105, inquiring a specific first corresponding relation according to the target calibration value to obtain the distance between the person corresponding to the portrait and the image sensor, wherein the first corresponding relation is used for representing the mapping relation between the calibration value and the distance.
Here, the mapping relationship between the calibration value and the distance may be determined by actual measurement, or calculated with a specific calculation formula according to the actual situation. In implementation, the first corresponding relation may be pre-stored in local storage or at a remote end, and retrieved and queried when needed.
In some embodiments, the characteristic line segment is used for characterizing the human head in a specific dimension, and the determining the number of pixels occupied by the characteristic line segment of the human head in the image to be detected includes: determining the number of pixels occupied by the human head in the specific dimension as the number of pixels; correspondingly, the determining a target calibration value according to the number of pixels includes: determining the ratio of the number of pixels to the total number of pixels of the image to be detected in the specific dimension as the target calibration value.
The distance detection method provided by the embodiment of the application takes the head of the portrait as the main detection base point, detects the number of pixels occupied in the image to be detected by a characteristic line segment representing the width, the length or the length-width diagonal of the head, and determines the distance between the person corresponding to the portrait and the image sensor according to the target calibration value derived from the number of pixels, combined with a specific mapping relation between calibration values and distances. Firstly, when images are collected from different angles, the pixels occupied by such characteristic line segments change little, so the distance between the person and the image sensor can be judged accurately no matter from which angle the person faces the image sensor. Secondly, compared with judging the distance by detecting the pixels occupied by the face area or other complex features in the related art, where different users at different distances may present the same face area, the head characteristic line segment differs little between different people, making the distance judgment more accurate. Thirdly, calculating the pixels occupied by a line segment during image detection is faster than calculating the pixels occupied by an area or other complex features.
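To make the flow of steps S101 to S105 concrete, the following Python sketch wires the pieces together. The calibration table entries are invented placeholders, segment_pixel_count is the helper sketched above, and a simple nearest-entry lookup stands in for the first corresponding relation; the later embodiments refine it with table lookup plus curve fitting.

```python
import numpy as np

# Illustrative first corresponding relation: calibration value (width ratio)
# -> distance in metres. Real entries come from the calibration measurement
# described further below.
CALIBRATION_TABLE = {0.30: 0.5, 0.20: 1.0, 0.15: 1.5, 0.12: 2.0}

def detect_distance(frame: np.ndarray, head_mask: np.ndarray) -> float:
    """Steps S103-S105 for an acquired frame (S101) whose portrait
    recognition (S102) produced head_mask."""
    n = segment_pixel_count(head_mask, "width")     # S103: pixels on the segment
    target = n / frame.shape[1]                     # S104: ratio to image width
    nearest = min(CALIBRATION_TABLE, key=lambda w: abs(w - target))
    return CALIBRATION_TABLE[nearest]               # S105: query the correspondence
```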
An embodiment of the present application provides a distance detection method, which may be executed by a processor, and as shown in fig. 2, the method includes:
step S201, image acquisition is carried out to obtain an image to be detected;
step S202, carrying out portrait recognition on the image to be detected;
step S203, when a portrait is identified in the image to be detected, determining the number of pixels occupied by the characteristic line segment of the head of the person in the image to be detected;
here, steps S201 to S203 correspond to the aforementioned steps S101 to S103, and may be implemented with reference to the embodiments of the aforementioned steps S101 to S103.
Step S204, determining the ratio of the number of the pixels to the number of the pixels occupied by the specific reference line as a target calibration value;
here, the specific reference line may be a reference line used for representing a specific dimension in the image to be detected, and the number of pixels occupied by the specific reference line may be a total number of pixels of the image to be detected in the specific dimension. In implementation, the specific dimension may be a dimension parallel or perpendicular to the characteristic line segment, or any other suitable dimension. The person skilled in the art can select an appropriate dimension to determine the reference line according to practical situations in implementation, which is not limited in the embodiment of the present application.
Step S205, querying a specific first corresponding relationship according to the target calibration value to obtain a distance between the person corresponding to the portrait and the image sensor, where the first corresponding relationship is used to represent a mapping relationship between the calibration value and the distance.
Here, step S205 corresponds to step S105 described above, and may be implemented with reference to the embodiment of step S105 described above.
An embodiment of the present application provides a distance detection method, which may be executed by a processor, and includes:
s301, acquiring an image to be detected to obtain an image to be detected;
step S302, carrying out portrait recognition on the image to be detected;
step S303, when a portrait is identified in the image to be detected, determining the number of pixels occupied by the characteristic line segment of the head of a person in the image to be detected;
here, steps S301 to S303 correspond to steps S101 to S103 described above, and may be implemented with reference to the embodiments of steps S101 to S103 described above.
Step S304, obtaining a starting point and an end point of the characteristic line segment;
step S305, determining the midpoint of the characteristic line segment according to the pixel number, the starting point and the end point;
step S306, determining the distance between the midpoint of the characteristic line segment and the reference line as a target calibration value;
here, the specific reference line may be a reference line for characterizing a specific dimension in the image to be detected. In implementation, the specific dimension may be a dimension parallel or perpendicular to the feature line segment, or any other suitable dimension. The skilled person can select an appropriate dimension to determine the reference line according to practical situations in implementation, which is not limited in the embodiment of the present application.
Step S307, inquiring a specific first corresponding relation according to the target calibration value to obtain the distance between the person corresponding to the portrait and the image sensor, wherein the first corresponding relation is used for representing the mapping relation between the calibration value and the distance.
Here, step S307 corresponds to step S105 described above, and may be implemented with reference to the embodiment of step S105 described above.
In some embodiments, the characteristic line segment is a face length line segment, and the reference line is a transverse reference line. Correspondingly, the step S306 includes: determining the distance between the midpoint of the face length line segment and the transverse reference line as the target calibration value.
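A minimal sketch of steps S304 to S306 under this embodiment, assuming the transverse reference line is a fixed horizontal pixel row of the image; the coordinate convention and the row index are illustrative assumptions.

```python
def midpoint_calibration(start: tuple[int, int], end: tuple[int, int],
                         reference_row: int) -> float:
    """Target calibration value as the distance between the midpoint of a
    face length line segment and a transverse reference line (S304-S306).

    start, end : (row, col) pixel coordinates of the segment's end points.
    """
    mid_row = (start[0] + end[0]) / 2.0     # S305: midpoint of the segment
    return abs(mid_row - reference_row)     # S306: distance to the reference line
```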
An embodiment of the present application provides a distance detection method, which may be executed by a processor, and includes:
s401, acquiring an image to be detected to obtain an image to be detected;
step S402, carrying out portrait recognition on the image to be detected;
step S403, when a portrait is identified in the image to be detected, determining the number of pixels occupied by the characteristic line segment of the head of a person in the image to be detected;
step S404, determining a target calibration value according to the number of the pixels;
here, steps S401 to S404 correspond to the aforementioned steps S101 to S104, and may be implemented with reference to the embodiments of the aforementioned steps S101 to S104.
Step S405, obtaining specific human body characteristics obtained by image recognition of the portrait;
here, the human body characteristics may include, but are not limited to, body fat rate, age, region, body width, and any human-related characteristic that may be associated with a size of a human head.
Step S406, updating the target calibration value according to the human body characteristics;
here, since there is a certain difference between head sizes (including head width, head length, face width, face length, and the like) of different persons, there is a correlation between human body characteristics and the head sizes of the persons. Therefore, in order to improve the accuracy in determining the distance between the person and the image sensor from the characteristic line segment of the head of the person, the target calibration value may be appropriately adjusted according to the characteristics of the human body.
In implementation, an appropriate manner of updating the target calibration value may be selected according to the association between the human body characteristic and head size, which is not limited in the embodiment of the present application. For example, the higher the body fat rate, the fatter the person and the larger the head tends to be; therefore, according to the association between body fat rate and head size, a ratio parameter k associated with the body fat rate may be set to adjust, and thereby update, the determined target calibration value. As another example, a ratio parameter m associated with age may be set according to the association between age and head size to update the determined target calibration value. As another example, the region to which the person belongs, such as northern or southern, may be determined from the human body characteristics, and a ratio parameter associated with that region may be set according to the association between region and head size to adjust the determined target calibration value. As another example, a person with a larger build also has a correspondingly wider head or face, so a ratio parameter n associated with body width may be set according to the association between body width and head size to adjust the determined target calibration value. In implementation, the ratio parameters may be set by the user or determined by big-data fitting, which is not limited in the embodiment of the present application.
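A sketch of step S406 for the body fat example follows, with the caveat that the bands, the ratio values, and the direction of the correction are all illustrative assumptions; the text leaves the actual ratio parameter k to user settings or big-data fitting.

```python
# Hypothetical second corresponding relation: body fat band -> ratio parameter k.
BODY_FAT_RATIO = {"low": 0.95, "medium": 1.00, "high": 1.08}

def update_calibration(target: float, body_fat_band: str) -> float:
    """Update the target calibration value with the ratio parameter k
    associated with the recognized body fat band (step S406)."""
    k = BODY_FAT_RATIO[body_fat_band]
    # One plausible normalization: a larger-than-average head inflates the
    # pixel count, so the calibration value is scaled down accordingly.
    return target / k
```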
Step S407, inquiring a specific first corresponding relation according to the updated target calibration value to obtain a distance between a person corresponding to the portrait and the image sensor, wherein the first corresponding relation is used for representing a mapping relation between the calibration value and the distance.
Here, step S407 corresponds to step S105 described above, and may be implemented with reference to the embodiment of step S105 described above.
In some embodiments, the human body characteristic is body fat rate. Correspondingly, the step S405 includes: obtaining a target body fat rate by performing image recognition on the portrait; the step S406 includes: determining a first ratio parameter corresponding to the target body fat rate according to a specific second corresponding relation, wherein the second corresponding relation is used for representing the association between the body fat rate and the first ratio parameter; and updating the target calibration value according to the target calibration value and the first ratio parameter. Here, the second corresponding relation may be a specific mapping table or a specific mapping formula, which is not limited in the embodiment of the present application. In implementation, the second corresponding relation may be pre-stored in local storage or at a remote end, and retrieved and queried when needed.
In some embodiments, the human body characteristic is age. Correspondingly, the step S405 includes: obtaining a target age by performing image recognition on the portrait; the step S406 includes: determining a second ratio parameter corresponding to the target age according to a specific third corresponding relation, wherein the third corresponding relation is used for representing the association between the age and the second ratio parameter; and updating the target calibration value according to the target calibration value and the second ratio parameter.
Here, the processor may perform age identification through an image recognition technique. When the method is implemented, the height of the person corresponding to the portrait can be identified, and the age of the person is judged according to the height; the head-body proportion of the portrait can be identified, and the age of the person can be judged according to the head-body proportion. In some embodiments, when the face of the person is included in the portrait, the corresponding age of the portrait may also be detected through face recognition techniques.
The third corresponding relation may be a specific mapping relation table or a specific mapping formula, which is not limited in this application. In implementation, the third corresponding relation may be pre-stored in local memory or at a remote end, and retrieved and queried when needed.
According to the distance detection method provided by the embodiment of the application, the target calibration value is adjusted appropriately according to the association between a specific human body characteristic and the head size of the person, so that for people with different body characteristics, querying the specific corresponding relation between calibration values and distances yields a more accurate distance, further improving the accuracy of distance detection.
An embodiment of the present application provides a distance detection method, which may be executed by a processor, and includes:
step S501, image acquisition is carried out to obtain an image to be detected;
step S502, carrying out portrait recognition on the image to be detected;
step S503, when a portrait is identified in the image to be detected, determining the number of pixels occupied by the characteristic line segment of the head of the person in the image to be detected;
step S504, according to the number of the pixels, determining a target calibration value;
here, step S501 to step S504 may refer to the foregoing embodiments of step S101 to step S104 when implemented, and are not described here again.
Step S505, querying a specific mapping relation table according to the target calibration value to obtain a target distance between the person corresponding to the portrait and the image sensor; the mapping relation table is used for representing the mapping relation between the calibration value and the distance.
Here, the mapping relation table may be determined in advance and stored in local storage; it contains the mapping between calibration values and distances, and the target distance between the person corresponding to the portrait and the image sensor may be obtained by querying the table.
It should be noted that the mapping relation between the calibration value and the distance in the mapping relation table can be determined by actual measurement. In implementation, either the distance between the person and the image sensor may be taken as the independent variable with the calibration value determined from the number of pixels of the head characteristic line segment as the dependent variable, or the calibration value may be taken as the independent variable with the distance as the dependent variable; calibration measurement is then performed at a specific calibration precision, and the mapping relation between the calibration value and the distance is finally determined. During calibration measurement, the measurement range of the distance between the person and the image sensor may span from the minimum to the maximum distance between them, and the measurement range of the calibration value and the calibration precision adopted may be determined according to the actual situation.
Taking the distance between the person and the image sensor as the independent variable as an example: when the minimum distance between the person in the current space and the image sensor is 0 and the maximum distance is D_max, the measurement range of the distance is [0, D_max]. If the calibration accuracy adopted is r, then during calibration measurement the calibration values W_r, W_2r, ..., W_Nr corresponding to the distances r, 2r, ..., Nr between the person and the image sensor can be measured in sequence, where N is a positive integer and Nr ≤ D_max. In some embodiments, the calibration accuracy r may be 0.1 m; in actual measurement, W_0.1 is then the calibration value corresponding to 0.1 m, W_0.2 the calibration value corresponding to 0.2 m, and so on, so that a mapping relation table between calibration values and distances is finally obtained. In some embodiments, the calibration accuracy r may also be 1 m or 0.05 m, etc.
In the implementation process, in order to ensure the accuracy of measurement, the calibration value of the head region corresponding to the same distance in the image to be detected may be measured multiple times, with the average value taken as the final measurement result.
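The calibration procedure above can be sketched as follows; measure_calibration stands in for physically placing a subject at each distance and reading the calibration value off the captured image, and the repeat count is an assumption.

```python
from statistics import mean
from typing import Callable, Dict

def build_mapping_table(measure_calibration: Callable[[float], float],
                        d_max: float, r: float,
                        repeats: int = 5) -> Dict[float, float]:
    """Step through the distances r, 2r, ..., Nr <= d_max, measure the
    calibration value several times at each distance, and store the
    average as a calibration-value -> distance entry."""
    table: Dict[float, float] = {}
    n = 1
    while n * r <= d_max:
        d = round(n * r, 3)
        w = mean(measure_calibration(d) for _ in range(repeats))
        table[w] = d
        n += 1
    return table
```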
The distance detection method provided by the embodiment of the application can, by way of table lookup, quickly and directly obtain the distance between the image sensor and the person corresponding to the calibration value determined from the number of pixels currently occupied by the head characteristic line segment in the image to be detected, which improves the efficiency of distance detection and reduces the amount of calculation.
An embodiment of the present application provides a distance detection method, which may be executed by a processor, and includes:
step S601, image acquisition is carried out to obtain an image to be detected;
step S602, carrying out portrait recognition on the image to be detected;
step S603, when a portrait is identified in the image to be detected, determining the number of pixels occupied by the characteristic line segment of the head of the person in the image to be detected;
step S604, determining a target calibration value according to the number of the pixels;
Step S605, querying a specific mapping relation table according to the target calibration value to obtain a target distance between the person corresponding to the portrait and the image sensor; the mapping relation table is used for representing the mapping relation between the calibration value and the distance;
here, in the implementation of steps S601 to S605, reference may be made to the implementation of steps S201 to S205, which is not described herein again.
Step S606, when the distance corresponding to the target calibration value does not exist in the mapping relation table, performing curve fitting on the corresponding relation between the calibration value and the distance in a specific calibration value interval based on data in the mapping relation table to obtain a curve fitting result; wherein the calibration value interval contains the target calibration value;
Here, since the mapping relationship between every possible calibration value and distance cannot be measured in actual measurement, when the mapping relation table contains no distance corresponding to the target calibration value, the distance between the person corresponding to the target calibration value and the image sensor may be obtained by curve fitting. In implementation, the curve fitting may be based on all of the data in the mapping relation table or on part of it. Assuming the calibration precision is r and the current target calibration value is w, if W_nr < w < W_(n+1)r, the distance between the person corresponding to the target calibration value w and the image sensor can be calculated by means of curve fitting.
In some embodiments, the distances corresponding to the two calibration values numerically closest to the target calibration value may be queried in the mapping relation table, and a first-order curve fitting may then be performed on the correspondence between calibration value and distance in the calibration value interval based on those two calibration values and their respective distances. Here, the first-order curve fitting may be a straight-line fitting. In implementation, if the calibration precision is r and the target calibration value is w with W_nr < w < W_(n+1)r, the distances corresponding to the calibration values W_nr and W_(n+1)r may first be queried in the mapping relation table, and a first-order curve fitting performed on the correspondence between calibration value and distance in the (W_nr, W_(n+1)r) interval based on the two calibration values and their corresponding distances. In the implementation process, the two calibration values and their corresponding distances may be mapped to two points in a two-dimensional coordinate system with the calibration value on the horizontal axis and the distance on the vertical axis; the first-order curve fitting then yields the straight line segment with these two points as end points, which represents the correspondence between calibration value and distance in the (W_nr, W_(n+1)r) interval.
Step S607, querying the curve fitting result according to the target calibration value to obtain the target distance between the person corresponding to the portrait and the image sensor.
Here, the curve fitting result includes the distance corresponding to the target calibration value, so the target distance between the corresponding person and the image sensor can be obtained through a simple query.
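Combining the table lookup of step S605 with the first-order (straight-line) fitting of steps S606 and S607 gives a short sketch; the handling of a target value outside the measured range is left out, and the two-nearest-keys selection is one simple reading of the embodiment above.

```python
from typing import Dict

def lookup_distance(table: Dict[float, float], w: float) -> float:
    """Exact table hit if present (S605); otherwise fit a straight line
    through the two entries whose calibration values are numerically
    closest to w and read the distance off that line (S606-S607)."""
    if w in table:
        return table[w]
    w1, w2 = sorted(table, key=lambda key: abs(key - w))[:2]
    d1, d2 = table[w1], table[w2]
    # straight line through (w1, d1) and (w2, d2)
    return d1 + (d2 - d1) * (w - w1) / (w2 - w1)
```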
According to the distance detection method provided by the embodiment of the application, when the distance corresponding to the target calibration value cannot be queried directly in the table, the distance between the corresponding person and the image sensor can still be determined from the existing data in the table by combining table lookup with curve fitting. The calibration interval can therefore be enlarged appropriately when calibrating the mapping relation between the calibration value and the distance, reducing the number of calibration measurements while keeping the curve fitting sufficiently accurate.
An embodiment of the present application provides a distance detection method, which may be executed by a processor, and includes:
s701, acquiring an image to be detected to obtain an image to be detected;
step S702, based on a preset human head sample, utilizing an image recognition technology to carry out human head matching on the image to be detected to obtain a corresponding matching rate;
here, the preset human head samples may include, but are not limited to, samples respectively corresponding to a front of a head (a side where a face of a person is located), a side of a head (any one of two sides where both ears are located), and a back of a head (a side where a hindbrain spoon is located). When the human head is matched, different types of samples such as the front side, the side and the rear side of the head of the image to be detected can be respectively matched with the image to be detected, and corresponding matching rates are respectively obtained; or after different types of samples such as the front of the head, the side of the head, and the like are respectively matched, the matching rate corresponding to each type is weighted according to specific weight distribution, and a total matching rate is obtained.
Step S703, when the matching rate is greater than a set matching threshold, determining that a portrait is identified in the image to be detected;
here, the matching rate may be a plurality of matching rates respectively corresponding to different types of head samples, and at this time, when any one of the plurality of matching rates is greater than a set matching threshold, it may be determined that a portrait is recognized in the image to be detected. The matching rate may also be a total matching rate obtained by integrating the matching rates corresponding to the head samples of each category, and at this time, when the total matching rate is greater than a set matching threshold, it may be determined that a portrait is recognized in the image to be detected.
Step S704, when a portrait is identified in the image to be detected, determining the number of pixels occupied by the characteristic line segment of the head of a person in the image to be detected;
step S705, determining a target calibration value according to the number of the pixels;
step S706, inquiring a specific first corresponding relation according to the target calibration value to obtain the distance between the person corresponding to the portrait and the image sensor, wherein the first corresponding relation is used for representing the mapping relation between the calibration value and the distance.
It should be noted that, in the implementation of the steps S701 and S704 to S706, reference may be made to the implementation of the steps S101 and S103 to S105, which is not described herein again.
According to the distance detection method provided by the embodiment of the application, whether a portrait is recognized is judged through head matching, so that portraits at different angles can be recognized, improving the accuracy of portrait recognition.
The embodiment of the application provides a distance detection method, which is applied to fan control and can be executed by a processor of a fan, and the method comprises the following steps:
step S801, image acquisition is carried out to obtain an image to be detected;
step S802, identifying the portrait of the image to be detected;
step S803, when a portrait is identified in the image to be detected, determining the number of pixels occupied by the characteristic line segment of the head of a person in the image to be detected;
step S804, determining a target calibration value according to the number of the pixels;
step S805, inquiring a specific first corresponding relation according to the target calibration value to obtain a target distance between a person corresponding to the portrait and the image sensor, wherein the first corresponding relation is used for representing a mapping relation between the calibration value and the distance;
it should be noted that, in the implementation of the above steps S801 to S805, reference may be made to the implementation of the above steps S101 to S105, which is not described herein again.
Step S806, determining, according to a specific corresponding relation between distance and wind speed and according to the target distance between the person corresponding to the portrait and the image sensor, the target wind speed at which the fan supplies air to the person corresponding to the portrait;
here, the correspondence between the specific distance and the wind speed may be a one-to-one mapping between the preset distance and the wind speed, or may be calculated according to a specific calculation formula. In some embodiments, it may be that the farther the target distance, the greater the wind speed.
Step S807, controlling a driving component of the fan, and driving the fan to supply air to a person corresponding to the portrait according to the target wind speed;
here, the driving assembly of the fan may include, but is not limited to, one or more of a direct current motor, a vertical swing head stepping motor, and a horizontal swing head stepping motor. During implementation, the direct current motor can be used as a power entity for rotating the fan, the vertical head-swinging stepping motor is used as a power entity for shaking the fan up and down, the horizontal head-swinging stepping motor is used as a power entity for shaking the fan left and right, and the control assembly can drive the fan to supply air to the person corresponding to the portrait according to the target air speed by controlling the direct current motor according to the corresponding rotating speed according to the target air speed.
Step S808, identifying the age of the portrait and determining a target age bracket corresponding to the portrait;
here, the processor may perform age identification through an image recognition technique. When the method is implemented, the height of the person corresponding to the portrait can be identified, and the age of the person is judged according to the height; the head-body proportion of the portrait can be identified, and the age of the person can be judged according to the head-body proportion. In some embodiments, when the face of the person is included in the portrait, the corresponding age of the portrait may also be detected through face recognition techniques, thereby determining the corresponding target age group.
Step S809, when the target distance is smaller than a first distance threshold and the target age bracket is below a specific age threshold, controlling the fan to stop running.
Here, the first distance threshold may be the minimum safe distance between the fan and a child, and the specific age threshold may be the age that defines whether a person is a child. The first distance threshold and the age threshold may be values preset by the user and stored in local memory, or default values set by the processor. To stop the fan, the processor may control the direct current motor to stop rotating, or control the power supply of the fan to be switched off.
In practice, the processor may determine whether a child is currently approaching the fan based on whether the target distance is less than a first distance threshold and whether the target age group is less than a specific age threshold. And when the target distance is smaller than a first distance threshold value and the target age range is smaller than a specific age threshold value, determining that a child approaches the fan, and controlling the fan to stop running by the processor.
In some embodiments, when it is determined that the target distance between the person corresponding to the portrait and the fan is less than the set first distance threshold and that the target age bracket corresponding to the portrait is below the set age threshold, the control assembly may further control a buzzer of the fan to sound an alarm, alerting a parent that a child is near the fan.
In some embodiments, the method may further comprise: when the target distance is smaller than a second distance threshold value, controlling the fan to stop running; wherein the second distance threshold is less than the first distance threshold.
Here, the second distance threshold may be a minimum safe distance between the fan and the adult, and the processor may also control the fan to stop operating when the distance between the adult and the fan is less than the minimum safe distance. In practice, the second distance threshold may be less than the first distance threshold, as adults have some awareness and ability to protect themselves.
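The two safety checks (steps S808/S809 and the second-threshold variant above) can be combined in one predicate; every threshold value below is a placeholder, since the text leaves the values to user settings or processor defaults.

```python
def should_stop_fan(distance_m: float, age_years: float,
                    child_threshold_m: float = 1.0,
                    adult_threshold_m: float = 0.3,
                    child_age: float = 12.0) -> bool:
    """True when the fan should stop: a child inside the first (child)
    distance threshold, or anyone inside the smaller second threshold."""
    if distance_m < child_threshold_m and age_years < child_age:
        return True                        # child near the fan
    return distance_m < adult_threshold_m  # second, smaller threshold
```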
According to the distance detection method of the embodiment of the application, after the distance between the person and the fan is accurately detected through image recognition, the air supply speed can be adjusted in real time according to that distance, meeting the different blowing needs of users at different distances and improving the user experience. In addition, while the fan is running, the distance between a person and the fan and the age bracket of the person can be accurately detected through image recognition, and whether a child is within the fan's dangerous distance is determined from that distance and age bracket; when a child is found within the dangerous distance, the fan can be switched off actively and a warning sounded through the buzzer to remind parents to pay attention, thereby protecting children's safety.
Based on the foregoing embodiments, an embodiment of the present application provides a distance detection apparatus, and fig. 3 is a schematic structural diagram of the distance detection apparatus provided in the embodiment of the present application, and as shown in fig. 3, the distance detection apparatus 900 includes: a processor 910, an image sensor 920, wherein:
the processor 910 is configured to perform image acquisition through the image sensor 920 to obtain an image to be detected; carrying out portrait recognition on the image to be detected; when a portrait is identified in the image to be detected, determining the number of pixels occupied by the characteristic line segment of the head of the person in the image to be detected; determining a target calibration value according to the number of the pixels; inquiring a specific first corresponding relation according to the target calibration value to obtain a target distance between a person corresponding to the portrait and the image sensor 920, wherein the first corresponding relation is used for representing a mapping relation between the calibration value and the distance;
and an image sensor 920 for performing image acquisition.
Here, the processor may include, but is not limited to, any one or more of a Central Processing Unit (CPU), a Microprocessor (MPU), a Digital Signal Processor (DSP), or a Field Programmable Gate Array (FPGA). The image sensor may be any photosensitive element capable of image acquisition. In practice, the image sensor may be a camera. The processor can control the image sensor to acquire a two-dimensional plane image of the current space as an image to be detected by sending an image acquisition instruction to the image sensor.
In some embodiments, the processor is further configured to: determining the number of pixels occupied by the human head in the specific dimension as the number of pixels; and determining the ratio of the number of the pixels to the total number of the pixels of the image to be detected in the specific dimension as the target calibration value.
In some embodiments, the processor is further configured to: determining the ratio of the number of the pixels to the number of the pixels occupied by a specific reference line as the target calibration value; or,
obtaining a starting point and an end point of the characteristic line segment; determining the middle point of the characteristic line segment according to the pixel number, the starting point and the end point; and determining the distance between the midpoint of the characteristic line segment and the reference line as the target calibration value.
In some embodiments, the characteristic line segment is a face length line segment, and the reference line is a transverse reference line. Correspondingly, the processor is further configured to: determining the distance between the midpoint of the face length line segment and the transverse reference line as the target calibration value.
In some embodiments, the processor is further configured to: when the head is recognized as the front, the characteristic line segment is a face width line segment, a face length line segment or a face length-width diagonal; when the head is recognized as the back, the characteristic line segment is a head width line segment, a head length line segment or a head length-width diagonal; and when the head is recognized as the side, the characteristic line segment is a face width line segment.
In some embodiments, the processor is further configured to: obtaining a target body fat rate by performing image recognition on the portrait; determining a first ratio parameter corresponding to the target body fat rate according to a specific second corresponding relation, wherein the second corresponding relation is used for representing the association between the body fat rate and the first ratio parameter; updating the target calibration value according to the target calibration value and the first ratio parameter; and querying the first corresponding relation according to the updated target calibration value to obtain the target distance between the person corresponding to the portrait and the image sensor.
In some embodiments, the processor is further configured to: acquiring a target age obtained by performing image recognition on the portrait; determining a second ratio parameter corresponding to the target age according to a specific third corresponding relation, wherein the third corresponding relation is used for representing the association relation between the age and the second ratio parameter; updating the target calibration value according to the target calibration value and the second ratio parameter; and inquiring the first corresponding relation according to the updated target calibration value to obtain the target distance between the person corresponding to the portrait and the image sensor.
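By way of a hedged illustration of the age-based correction (the body fat rate correction is analogous), the sketch below assumes the third corresponding relation is a lookup table of age ranges and that "updating" means multiplying the calibration value by the ratio parameter; both the table values and the multiplicative update rule are assumptions, since the embodiments leave them unspecified:

```python
# Hypothetical third corresponding relation: age range -> second ratio parameter.
AGE_RATIO_TABLE = {(0, 12): 1.15, (13, 59): 1.00, (60, 120): 1.05}

def update_calibration_value(calibration_value: float, age: int) -> float:
    """Update the target calibration value with the second ratio parameter
    looked up for the target age; the result is then used to query the
    first corresponding relation as before."""
    for (low, high), ratio in AGE_RATIO_TABLE.items():
        if low <= age <= high:
            return calibration_value * ratio
    return calibration_value  # no correction if the age falls outside the table
```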
In some embodiments, the processor is further configured to: inquiring a specific mapping relation table according to the target calibration value to obtain a target distance between a person corresponding to the portrait and the image sensor; the mapping relation table is used for representing the mapping relation between the calibration value and the distance.
In some embodiments, the processor is further configured to: when the distance corresponding to the target calibration value does not exist in the mapping relation table, performing curve fitting on the corresponding relation between the calibration value and the distance in a specific calibration value interval based on data in the mapping relation table to obtain a curve fitting result; wherein the calibration value interval contains the target calibration value; and inquiring the curve fitting result according to the target calibration value to obtain the target distance between the person corresponding to the portrait and the image sensor.
In some embodiments, the processor is further configured to: querying the distance corresponding to two calibration values which are closest to the target calibration value in terms of numerical value in the mapping relation table; and performing curve fitting on the corresponding relation between the calibration value and the distance in the specific calibration value interval on the basis of the two calibration values and the distances corresponding to the two calibration values respectively.
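A minimal sketch of this table query with first-order (linear) fitting between the two numerically closest calibration values; the table contents are hypothetical, and the linear fit is only one of the curve forms the embodiments allow:

```python
def distance_from_table(target_w: float, table: dict[float, float]) -> float:
    """Query a calibration-value-to-distance mapping table; if the target
    calibration value is absent, fit a first-order curve (straight line)
    through the two calibrations numerically closest to it. Assumes the
    table holds at least two entries."""
    if target_w in table:
        return table[target_w]
    w1, w2 = sorted(table, key=lambda k: abs(k - target_w))[:2]
    d1, d2 = table[w1], table[w2]
    return d1 + (d2 - d1) * (target_w - w1) / (w2 - w1)

# Example with a hypothetical table of head-width pixel ratios to distances (m).
dist = distance_from_table(0.12, {0.10: 2.0, 0.15: 1.4, 0.20: 1.0})  # -> 1.76
```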
In some embodiments, the processor is further configured to: based on a preset human head sample, utilizing an image identification technology to carry out human head matching on the image to be detected to obtain a corresponding matching rate; and when the matching rate is greater than a set matching threshold value, determining that a portrait is identified in the image to be detected.
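The embodiments do not name a specific image recognition technology; as one hedged possibility, OpenCV normalized template matching can yield a matching rate to compare against the threshold. The threshold value and function names below are assumptions:

```python
import cv2

MATCH_THRESHOLD = 0.7  # hypothetical; the embodiments only require "a set matching threshold"

def portrait_detected(image_gray, head_sample_gray) -> bool:
    """Match the image to be detected against a preset human head sample and
    compare the best matching rate with the threshold. The head sample must
    be no larger than the image for template matching to apply."""
    result = cv2.matchTemplate(image_gray, head_sample_gray, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, _ = cv2.minMaxLoc(result)  # max_val is the best matching rate
    return max_val > MATCH_THRESHOLD
```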
In some embodiments, the processor is further configured to: according to the corresponding relation between the specific distance and the wind speed, determining the target wind speed when the fan sends wind to the person corresponding to the portrait according to the target distance; controlling a driving component of the fan, and driving the fan to supply air to a person corresponding to the portrait according to the target wind speed; carrying out age identification on the portrait and determining a target age range corresponding to the portrait; and when the target distance is smaller than a first distance threshold value and the target age group is smaller than a specific age threshold value, controlling the fan to stop running.
An embodiment of the present application provides a distance detection device, where the device is a fan, and fig. 3 is a schematic structural diagram of the fan provided in the embodiment of the present application, and as shown in fig. 3, the fan 900 includes: a processor 910, an image sensor 920, wherein:
the processor 910 is configured to perform image acquisition through the image sensor 920 to obtain an image to be detected; carrying out portrait recognition on the image to be detected; when a portrait is identified in the image to be detected, determining the number of pixels occupied by the characteristic line segment of the head of the person in the image to be detected; determining a target calibration value according to the number of the pixels; inquiring a specific first corresponding relation according to the target calibration value to obtain a target distance between a person corresponding to the portrait and the image sensor 920, wherein the first corresponding relation is used for representing a mapping relation between the calibration value and the distance;
and an image sensor 920 for performing image acquisition.
In addition, in practice, the fan may further include an air supply assembly for supplying air, a power supply device for supplying power to each component of the fan, a fan guard for protection, a display panel, a key panel, a buzzer for audible alarms, and the like.
Here, the processor may include, but is not limited to, any one or more of a Central Processing Unit (CPU), a Microprocessor (MPU), a Digital Signal Processor (DSP), or a Field Programmable Gate Array (FPGA). The image sensor may be any photosensitive element capable of image acquisition. In practice, the image sensor may be a camera mounted on the fan. The processor can control the image sensor to acquire a two-dimensional plane image of the current space as an image to be detected by sending an image acquisition instruction to the image sensor.
In some embodiments, the processor is further configured to: according to the corresponding relation between the specific distance and the wind speed, determining the target wind speed when the fan sends wind to the person corresponding to the portrait according to the target distance between the person corresponding to the portrait and the image sensor; and controlling a driving component of the fan, and driving the fan to supply air to the person corresponding to the portrait according to the target wind speed. Correspondingly, the fan further comprises: and the driving component is used for driving the fan to supply air to the person corresponding to the portrait according to the target air speed.
Here, the correspondence between the specific distance and the wind speed may be a preset one-to-one mapping between distance and wind speed, or may be calculated according to a specific calculation formula. In some embodiments, the farther the person corresponding to the portrait is from the image sensor, the greater the wind speed.
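A toy sketch of one such correspondence, in which wind speed grows with distance; the gear boundaries are invented for illustration and are not taken from the embodiments:

```python
def target_wind_speed(distance_m: float) -> int:
    """One hypothetical distance-to-wind-speed correspondence: the farther
    the person corresponding to the portrait, the greater the wind speed."""
    if distance_m < 1.0:
        return 1  # low gear
    if distance_m < 3.0:
        return 2  # medium gear
    return 3      # high gear
```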
The drive assembly may include, but is not limited to, one or more of a direct current motor, a vertical oscillation stepper motor, and a horizontal oscillation stepper motor. In implementation, the direct current motor may serve as the power source for rotating the fan, the vertical oscillation stepper motor as the power source for oscillating the fan up and down, and the horizontal oscillation stepper motor as the power source for oscillating the fan left and right.
In some embodiments, the processor is further configured to: carrying out age identification on the portrait and determining a target age range corresponding to the portrait; and when the target distance is smaller than a first distance threshold value and the target age group is smaller than a specific age threshold value, controlling the fan to stop running.
In some embodiments, the processor is further configured to: when the target distance is smaller than a second distance threshold value, controlling the fan to stop running; wherein the second distance threshold is less than the first distance threshold.
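The stop logic of the two preceding paragraphs might be combined as in the sketch below; all three threshold values are placeholders, since the embodiments only constrain the second distance threshold to be smaller than the first:

```python
FIRST_DISTANCE_THRESHOLD = 1.0   # metres, hypothetical
SECOND_DISTANCE_THRESHOLD = 0.3  # metres, hypothetical; must be below the first
AGE_THRESHOLD = 7                # years, hypothetical "specific age threshold"

def should_stop_fan(target_distance: float, target_age: int | None) -> bool:
    """Stop when a person below the age threshold is within the first
    distance threshold, or when anyone is within the second (smaller) one."""
    if (target_age is not None
            and target_distance < FIRST_DISTANCE_THRESHOLD
            and target_age < AGE_THRESHOLD):
        return True
    return target_distance < SECOND_DISTANCE_THRESHOLD
```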
An embodiment of the present application provides a fan, and fig. 4A is a schematic diagram of the composition structure of the fan provided in the embodiment of the present application. As shown in fig. 4A, the fan includes: an image detection module 1001, an electric control board 1002, a direct current motor 1003, a vertical oscillation stepper motor 1004, a horizontal oscillation stepper motor 1005, a display, keys, and a buzzer, wherein:
the image detection module 1001 is used for detecting key parameters related to the position and distance of a person located in front of the fan, and for transmitting the key parameters to the electric control board;
the electronic control board function 1002 is used for acquiring the key parameters transmitted by the image detection module 1001 and logically controlling the direct current motor 1003, the vertical head swinging stepping motor 1004 and the horizontal head swinging stepping motor 1005 to rotate according to the key parameters;
a direct current motor 1003 for providing rotational power for the fan;
a vertical oscillation stepper motor 1004 for providing up-and-down oscillation power for the fan;
and a horizontal oscillation stepper motor 1005 for providing left-and-right oscillation power for the fan.
Here, the human head comprises a front (the side showing the face), a side (either of the two sides adjoining the face), and a back (the side of the back of the head).
Here, the method of detecting the distance between the person and the fan by the image detection module 1001 is as follows:
Firstly, the image detection module matches the image acquired in real time against a preset human head sample through an image recognition algorithm; the front, the side, or the back of the head can be detected during matching. If the resulting matching rate is greater than the set matching threshold, it indicates that a person has been detected in the image.
The image detection module may then deduce the distance between the person and the fan (or the image sensor in the image detection module) based on the proportion of pixels occupied by the person's head in the image. Fig. 4B and fig. 4C are schematic diagrams of the position of the portrait in the image at two different distances, where O is the camera, W is the number of pixels occupied by the person's head width in the image, and the length of line segment AB is the total number of pixels in the horizontal direction of the image. As can be seen from fig. 4B and fig. 4C, when the distance from the fan (or camera) to the person differs, the pixel proportion of the person in the image differs accordingly.
Here, the pixel proportion occupied by the person's head width in the image is denoted w (that is, w = W/AB), and the distance from the person to the fan corresponding to the pixel proportion w can be obtained by querying a mapping relation table between pixel proportion and distance, combined with a curve-fitting method.
In practice, the mapping relation table of the pixel proportion and the distance can be obtained through actual measurement. For example, W0.1 is calibrated in actual measurement as the head-width pixel proportion when the distance between the person and the fan is 0.1 m, W0.2 as the proportion corresponding to a distance of 0.2 m, and so on. During measurement, the calibration accuracy may be adjusted according to actual conditions; for example, the measurement may be calibrated with an accuracy of 0.1 m or 0.05 m. Fig. 4D is a schematic diagram of the correspondence between pixel proportion and distance obtained through actual measurement, in which the abscissa represents the pixel proportion occupied by the head width, the ordinate represents the distance from the person to the fan, and each point represents a calibrated pair of head-width pixel proportion and person-to-fan distance.
Setting the calibration precision as r: if w > nr and w < (n+1)r, the distance between the person and the fan corresponding to the pixel proportion w is calculated within the interval (nr, (n+1)r) according to a curve-fitting method. In practice, the fitted curve may be a first-order (linear) curve or another curve.
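Read literally, the inequality nr < w < (n+1)r treats the calibration precision r as a step in the pixel proportion w; under that reading (an interpretation, since the surrounding text also describes calibrating by distance), the interval fit might look like this sketch:

```python
def distance_by_interval_fit(w: float, r: float, table: dict[float, float]) -> float:
    """Locate the calibration interval (n*r, (n+1)*r) containing the pixel
    proportion w, then fit a first-order curve through the two calibrated
    endpoints of that interval. Assumes the table is keyed at multiples
    of the precision r (rounding guards against float drift)."""
    n = int(w // r)
    w_lo, w_hi = round(n * r, 6), round((n + 1) * r, 6)
    d_lo, d_hi = table[w_lo], table[w_hi]
    return d_lo + (d_hi - d_lo) * (w - w_lo) / (w_hi - w_lo)
```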
Besides the head width, the distance can also be calculated from the pixel proportion of the head length, of the head length-width diagonal, or of the head area.
The above description of the apparatus embodiments, similar to the above description of the method embodiments, has similar beneficial effects as the method embodiments. For technical details not disclosed in the embodiments of the apparatus of the present application, reference is made to the description of the embodiments of the method of the present application for understanding.
It should be noted that, in the embodiment of the present application, if the distance detection method is implemented in the form of a software functional module and sold or used as a standalone product, the distance detection method may also be stored in a computer readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application may be essentially implemented or portions thereof contributing to the related art may be embodied in the form of a software product stored in a storage medium, and including several instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read Only Memory (ROM), a magnetic disk, or an optical disk. Thus, embodiments of the present application are not limited to any specific combination of hardware and software.
Correspondingly, the embodiment of the present application provides a computer-readable storage medium, on which a computer program is stored, and the computer program realizes the steps of the above method when being executed by a processor.
Here, it should be noted that: the above description of the storage medium and device embodiments is similar to the description of the method embodiments above, with similar advantageous effects as the method embodiments. For technical details not disclosed in the embodiments of the storage medium and apparatus of the present application, reference is made to the description of the embodiments of the method of the present application for understanding.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. It should be understood that, in the embodiments of the present application, the size of the sequence numbers of the above-mentioned processes does not mean the execution sequence, and the execution sequence of the processes should be determined by the functions and the inherent logic thereof, and should not constitute any limitation to the implementation process of the embodiments of the present application. The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described device embodiments are merely illustrative, for example, the division of the unit is only a logical functional division, and there may be other division ways in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the coupling, direct coupling or communication between the components shown or discussed may be through some interfaces, indirect coupling or communication between devices or units, and may be electrical, mechanical or other.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; can be located in one place or distributed on a plurality of network units; some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, all functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may be separately regarded as one unit, or two or more units may be integrated into one unit; the integrated unit can be realized in a hardware form, and can also be realized in a form of hardware and a software functional unit.
Those of ordinary skill in the art will understand that: all or part of the steps for realizing the method embodiments can be completed by hardware related to program instructions, the program can be stored in a computer readable storage medium, and the program executes the steps comprising the method embodiments when executed; the storage medium includes: various media that can store program codes, such as a removable Memory device, a Read Only Memory (ROM), a magnetic disk, or an optical disk.
Alternatively, the integrated units described above in the present application may be stored in a computer-readable storage medium if they are implemented in the form of software functional modules and sold or used as independent products. Based on such understanding, the technical solutions of the embodiments of the present application may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: a removable storage device, a ROM, a magnetic or optical disk, or other various media that can store program code.
The above description is only for the embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the following claims.

Claims (15)

1. A method of distance detection, the method comprising:
acquiring an image to be detected to obtain an image to be detected;
carrying out portrait recognition on the image to be detected;
when a portrait is identified in the image to be detected, determining the number of pixels occupied by the characteristic line segment of the head of the person in the image to be detected;
determining a target calibration value according to the number of the pixels;
and inquiring a specific first corresponding relation according to the target calibration value to obtain a target distance between a person corresponding to the portrait and the image sensor, wherein the first corresponding relation is used for representing a mapping relation between the calibration value and the distance.
2. The method according to claim 1, wherein the characteristic line segment is used for characterizing the human head in a specific dimension, and the determining the number of pixels occupied by the characteristic line segment of the human head in the image to be detected comprises:
determining the number of pixels occupied by the human head in the specific dimension as the number of pixels;
correspondingly, the determining a target calibration value according to the number of pixels includes:
and determining the ratio of the number of the pixels to the total number of the pixels of the image to be detected in the specific dimension as the target calibration value.
3. The method of claim 1, wherein determining a target calibration value based on the number of pixels comprises:
determining the ratio of the number of the pixels to the number of the pixels occupied by the specific reference line as the target calibration value; or,
obtaining a starting point and an end point of the characteristic line segment; determining the midpoint of the characteristic line segment according to the pixel number, the starting point and the end point; and determining the distance between the midpoint of the characteristic line segment and the reference line as the target calibration value.
4. The method of claim 3, wherein the feature line segment is a face length line segment and the reference line is a transverse reference line;
correspondingly, the determining the distance from the midpoint of the feature line segment to the reference line as the target calibration value includes:
and determining the distance between the midpoint of the face length line segment and the transverse datum line as the target calibration value.
5. The method of claim 1,
when the human head is identified to be the front, the characteristic line segment is a human face width line segment, a human face length line segment or a human face length-width oblique line;
when the head part is identified as the back part, the characteristic line segment is a head width line segment, a head length line segment or a head length-width oblique line;
and when the head of the person is identified as the side face, the characteristic line segment is a face width line segment.
6. The method according to any one of claims 1 to 5, further comprising:
obtaining a target body fat rate obtained by performing image recognition on the portrait;
correspondingly, the querying a specific first corresponding relation according to the target calibration value to obtain a target distance between the person corresponding to the portrait and the image sensor includes:
determining a first ratio parameter corresponding to the target body fat rate according to a specific second corresponding relation, wherein the second corresponding relation is used for representing the association relation between the body fat rate and the first ratio parameter;
updating the target calibration value according to the target calibration value and the first ratio parameter;
and inquiring the first corresponding relation according to the updated target calibration value to obtain the target distance between the person corresponding to the portrait and the image sensor.
7. The method according to any one of claims 1 to 5, further comprising:
obtaining a target age obtained by performing image recognition on the portrait;
correspondingly, the querying a specific first corresponding relation according to the target calibration value to obtain a target distance between the person corresponding to the portrait and the image sensor includes:
determining a second ratio parameter corresponding to the target age according to a specific third corresponding relation, wherein the third corresponding relation is used for representing the association relation between the age and the second ratio parameter;
updating the target calibration value according to the target calibration value and the second ratio parameter;
and inquiring the first corresponding relation according to the updated target calibration value to obtain the target distance between the person corresponding to the portrait and the image sensor.
8. The method according to any one of claims 1 to 5, wherein said querying a specific first corresponding relation according to the target calibration value to obtain a target distance between a person corresponding to the portrait and the image sensor comprises:
inquiring a specific mapping relation table according to the target calibration value to obtain a target distance between a person corresponding to the portrait and the image sensor; the mapping relation table is used for representing the mapping relation between the calibration value and the distance.
9. The method of claim 8, wherein said querying a specific first corresponding relationship according to the target calibration value to obtain a target distance between the person corresponding to the portrait and the image sensor, further comprises:
when the distance corresponding to the target calibration value does not exist in the mapping relation table, performing curve fitting on the corresponding relation between the calibration value and the distance in a specific calibration value interval based on data in the mapping relation table to obtain a curve fitting result; wherein the calibration value interval contains the target calibration value;
and inquiring the curve fitting result according to the target calibration value to obtain the target distance between the person corresponding to the portrait and the image sensor.
10. The method of claim 9, wherein curve fitting the corresponding relationship between the calibration value and the distance in the specific calibration value interval based on the data in the mapping relation table comprises:
querying the distance corresponding to two calibration values which are closest to the target calibration value in terms of numerical value in the mapping relation table;
and performing curve fitting on the corresponding relation between the calibration values and the distances in the specific calibration value interval on the basis of the two calibration values and the distances corresponding to the two calibration values respectively.
11. The method according to any one of claims 1 to 5, wherein the human image recognition of the image to be detected comprises:
based on a preset human head sample, performing human head matching on the image to be detected by using an image recognition technology to obtain a corresponding matching rate;
and when the matching rate is greater than a set matching threshold value, determining that a portrait is identified in the image to be detected.
12. The method of claim 1, applied to a fan, further comprising:
according to the corresponding relation between the specific distance and the wind speed, determining the target wind speed when the fan sends wind to the person corresponding to the portrait according to the target distance;
controlling a driving component of the fan, and driving the fan to supply air to a person corresponding to the portrait according to the target wind speed;
carrying out age identification on the portrait and determining a target age range corresponding to the portrait;
and when the target distance is smaller than a first distance threshold value and the target age group is smaller than a specific age threshold value, controlling the fan to stop running.
13. A distance detection device, characterized in that the device comprises:
the image sensor is used for acquiring images;
the processor is used for acquiring images through the image sensor to obtain an image to be detected; carrying out portrait recognition on the image to be detected; when a portrait is identified in the image to be detected, determining the number of pixels occupied by the characteristic line segment of the head of the person in the image to be detected; determining a target calibration value according to the number of the pixels; and inquiring a specific first corresponding relation according to the target calibration value to obtain a target distance between a person corresponding to the portrait and the image sensor, wherein the first corresponding relation is used for representing a mapping relation between the calibration value and the distance.
14. The apparatus of claim 13, wherein the apparatus is a fan,
the processor is further configured to: according to the corresponding relation between the specific distance and the wind speed, determining the target wind speed when the fan sends wind to the person corresponding to the portrait according to the target distance between the person corresponding to the portrait and the image sensor; carrying out age identification on the portrait and determining a target age range corresponding to the portrait;
correspondingly, the fan further comprises:
the driving assembly is used for driving the fan to supply air to the person corresponding to the portrait according to the target wind speed;
the processor is further configured to: controlling a driving component of the fan, and driving the fan to supply air to a person corresponding to the portrait according to the target wind speed; and when the target distance is smaller than a first distance threshold value and the target age group is smaller than a specific age threshold value, controlling the fan to stop running.
15. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 12.
CN202010349760.9A 2020-04-28 2020-04-28 Distance detection method, device and storage medium Active CN113569592B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010349760.9A CN113569592B (en) 2020-04-28 2020-04-28 Distance detection method, device and storage medium
PCT/CN2020/129133 WO2021218117A1 (en) 2020-04-28 2020-11-16 Distance measurement method, apparatus and device, and storage medium

Publications (2)

Publication Number Publication Date
CN113569592A true CN113569592A (en) 2021-10-29
CN113569592B CN113569592B (en) 2024-06-28

Family

ID=78157961

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010349760.9A Active CN113569592B (en) 2020-04-28 2020-04-28 Distance detection method, device and storage medium

Country Status (2)

Country Link
CN (1) CN113569592B (en)
WO (1) WO2021218117A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107463883A (en) * 2017-07-18 2017-12-12 广东欧珀移动通信有限公司 Biometric discrimination method and Related product
CN107680128A (en) * 2017-10-31 2018-02-09 广东欧珀移动通信有限公司 Image processing method, device, electronic equipment and computer-readable recording medium
CN108830164A (en) * 2018-05-22 2018-11-16 北京小鱼在家科技有限公司 Reminding method, device, computer equipment and the storage medium of screen viewed status
WO2019023863A1 (en) * 2017-07-31 2019-02-07 深圳传音通讯有限公司 Photographing method and photographing system based on intelligent terminal
CN109814401A (en) * 2019-03-11 2019-05-28 广东美的制冷设备有限公司 Control method, household appliance and the readable storage medium storing program for executing of household appliance

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100695174B1 (en) * 2006-03-28 2007-03-14 삼성전자주식회사 Method and apparatus for tracking listener's head position for virtual acoustics
CN109492590A (en) * 2018-11-13 2019-03-19 广东小天才科技有限公司 Distance detection method, distance detection device and terminal equipment
CN109934207A (en) * 2019-04-15 2019-06-25 华东师范大学 A kind of characteristic distance modification method of driver face based on facial expression fatigue driving detection algorithm

Also Published As

Publication number Publication date
CN113569592B (en) 2024-06-28
WO2021218117A1 (en) 2021-11-04

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant