CN111898553A - Method and device for distinguishing virtual image personnel and computer equipment - Google Patents

Method and device for distinguishing virtual image personnel and computer equipment

Info

Publication number
CN111898553A
CN111898553A (application CN202010762179.XA)
Authority
CN
China
Prior art keywords
face
current
parameter value
image
size parameter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010762179.XA
Other languages
Chinese (zh)
Other versions
CN111898553B (en)
Inventor
董勇 (Dong Yong)
宁瑶 (Ning Yao)
Current Assignee
Chengdu Xinchao Media Group Co Ltd
Original Assignee
Chengdu Xinchao Media Group Co Ltd
Priority date
Filing date
Publication date
Application filed by Chengdu Xinchao Media Group Co Ltd filed Critical Chengdu Xinchao Media Group Co Ltd
Priority to CN202010762179.XA
Publication of CN111898553A
Application granted
Publication of CN111898553B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/168 Feature extraction; Face representation
    • G06V40/178 Estimating age from face image; using age information for improving recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of face recognition, and discloses a method, a device and computer equipment for distinguishing virtual image personnel. In the method, a current face near-far degree index value and current face two-dimensional coordinate data of an on-site person can be extracted in sequence from a face image. The current face near-far degree index value is then compared with the current face near-far degree index range corresponding to the current face two-dimensional coordinate data, determined from the correspondence between face two-dimensional coordinate data and face near-far degree index ranges. If the current face near-far degree index value lies outside the current face near-far degree index range, the on-site person cannot be moving in the given space in the way the test personnel who collected the face two-dimensional coordinate data and face near-far degree index ranges did, so whether the imaged person is a virtual image person can be judged with few computer resources. At the same time, no additional infrared camera needs to be configured, so hardware cost can be reduced.

Description

Method and device for distinguishing virtual image personnel and computer equipment
Technical Field
The invention belongs to the technical field of face recognition, and particularly relates to a method and a device for distinguishing virtual image personnel and computer equipment.
Background
Current face detection algorithms are quite well developed; for example, there are detection algorithms based on face key points and recognition algorithms based on deep learning, which can locate face bounding boxes and face key points with high precision. Deeper applications can be built on this basis, such as head pose estimation, that is, obtaining the pose angles of the head from a face image. (In three-dimensional space, the rotation pose of an object can be represented by three Euler angles: a pitch angle rotating around the X axis of a rectangular coordinate system, a yaw angle rotating around its Y axis, and a roll angle rotating around its Z axis; for the head, colloquially, these are the head-raising angle, head-tilting angle, and head-turning angle shown in fig. 1.) The specific steps can be as follows: (1) perform two-dimensional face key point detection on the face image; (2) match the detected two-dimensional face key points with the corresponding face key points in a three-dimensional face model; (3) solve the transformation relation matrix between the two-dimensional face key points and the corresponding three-dimensional face key points; (4) solve the three Euler angles of the head relative to the camera coordinate system from the rotation relation matrix (the camera coordinate system is the three-dimensional rectangular coordinate system whose origin is the focal center of the camera that captured the face image and whose Z axis is the optical axis).
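Step (4) above, recovering the three Euler angles from the rotation matrix, can be sketched as follows. This is an illustrative pure-NumPy fragment, not code from the patent; it assumes the common R = Rz(roll) @ Ry(yaw) @ Rx(pitch) convention, and in practice the matrix would come from a 2D-to-3D solver such as OpenCV's solvePnP followed by cv2.Rodrigues.

```python
import numpy as np

def rot_x(a):  # rotation about the X axis (pitch)
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):  # rotation about the Y axis (yaw)
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):  # rotation about the Z axis (roll)
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def euler_angles(R):
    """Pitch, yaw, roll in degrees from R = Rz(roll) @ Ry(yaw) @ Rx(pitch),
    assuming no gimbal lock (|yaw| < 90 degrees)."""
    yaw = np.arcsin(-R[2, 0])
    pitch = np.arctan2(R[2, 1], R[2, 2])
    roll = np.arctan2(R[1, 0], R[0, 0])
    return np.degrees([pitch, yaw, roll])
```

Building a matrix from known angles and recovering them round-trips exactly, which is a quick sanity check on the angle conventions.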
However, in a given space containing mirrors (mirrors are often installed in elevators, for example), face detection will often treat the virtual image persons in the mirror as real on-site persons, causing a series of subsequent misjudgments, and this problem is generally difficult to resolve with a visible-light vision algorithm alone. The current solution relies on the imaging results of an infrared camera: because human body heat is strongly attenuated by mirror reflection, a thermal map can reveal whether a virtual image person is present. This solution, however, adds extra equipment cost, which is unfavorable for practical popularization and application.
Disclosure of Invention
In order to solve the problem that, when face detection is performed in a given space containing a mirror, virtual image persons are difficult to distinguish, or distinguishing them incurs high hardware cost, the invention aims to provide a method, a device, computer equipment and a computer-readable storage medium for distinguishing virtual image persons.
In a first aspect, the present invention provides a method for identifying a virtual image person, including:
acquiring a face image, wherein the face image comprises at least one person;
extracting current face two-dimensional coordinate data and a current face distance index value of the person from the face image, wherein the current face distance index value is used for representing the current distance from the face of the person to image acquisition equipment, and the image acquisition equipment is used for acquiring the face image;
determining a current face near-far degree index range corresponding to the current face two-dimensional coordinate data according to the correspondence between face two-dimensional coordinate data and face near-far degree index ranges, wherein a face near-far degree index range is the interval of near-far degree index values that a face with the given two-dimensional coordinate data can take while moving in the given space;
and when the current face near-far degree index value lies outside the current face near-far degree index range, judging the person to be a virtual image person.
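The claimed steps amount to a range check keyed on image position. A minimal sketch (the function names and the toy calibrated lookup are illustrative, not from the patent):

```python
def is_virtual_image_person(face_xy, distance_index, index_range_for):
    """Judge a detected face to be a mirror (virtual) image when its
    near-far degree index falls outside the range that a real person
    at that 2D image position could produce."""
    low, high = index_range_for(face_xy)
    return not (low <= distance_index <= high)

# Toy calibrated lookup: at any position, a real face in this space
# produced index values between 40 and 110 (values are illustrative).
toy_lookup = lambda xy: (40.0, 110.0)
```

A face whose index value falls inside the calibrated interval is treated as real; one outside it (e.g. imaging far too small for its position) is flagged as a virtual image.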
Based on the above invention, a new method for identifying virtual image persons in a mirror using visible-light vision technology can be provided. A current face near-far degree index value and current face two-dimensional coordinate data of an on-site person are extracted in sequence from a face image acquired on site. The current face near-far degree index value is compared with the current face near-far degree index range corresponding to the current face two-dimensional coordinate data, determined from the correspondence between face two-dimensional coordinate data and face near-far degree index ranges. If the current face near-far degree index value lies outside the current face near-far degree index range, the on-site person cannot be moving in the given space in the way the test personnel who collected the face two-dimensional coordinate data and face near-far degree index ranges did, so whether the imaged person is a virtual image person can be judged with few computer resources. At the same time, no additional infrared camera needs to be configured, so the hardware cost can be reduced, which is favorable for practical application and popularization.
In one possible design, extracting the current face distance index value of the person from the face image includes:
and extracting a first face imaging size parameter value from the face image, wherein the first face imaging size parameter value is used as the current face distance degree index value.
Through this possible design, when the image acquisition equipment is a monocular camera, the near-far degree of the face relative to the origin of the camera coordinate system is reflected by the face imaging size, which can substitute for the Z-axis coordinate when ranging is not possible; this ensures that the current face near-far degree index value is extractable and achieves the purpose of judging virtual image persons.
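The reason imaging size can substitute for the Z-axis coordinate is the pinhole projection relation, under which image size is inversely proportional to depth. A one-line sketch (the focal length and real face width values are illustrative assumptions):

```python
def face_width_px(focal_px, real_face_width_m, depth_m):
    # Pinhole projection: a face twice as far away images half as wide,
    # so the imaged width stands in for the Z-axis coordinate. A virtual
    # image person is optically farther than the mirror surface and
    # therefore images smaller than a real person at the same 2D position.
    return focal_px * real_face_width_m / depth_m
```

For example, with an 800 px focal length, a 0.16 m wide face at 1 m images at 128 px, and at 2 m at 64 px.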
In one possible design, extracting the current face distance index value of the person from the face image includes:
extracting a current face attitude angle and a first face imaging size parameter value from the face image;
and calculating to obtain a second face imaging size parameter value according to the current face attitude angle and the first face imaging size parameter value, wherein the second face imaging size parameter value is used as the index value of the far and near degree of the current face, and the second face imaging size parameter value is a face imaging size parameter value corresponding to the face of the person when the face of the person is perpendicular to the optical axis of the image acquisition equipment under the condition that the face coordinate position is unchanged.
Through this possible design, when the image acquisition equipment is a monocular camera, the near-far degree of the face relative to the origin of the camera coordinate system is reflected by the face imaging size under the frontal view, which can substitute for the Z-axis coordinate when ranging is not possible; this ensures both that the current face near-far degree index value is extractable and that the subsequent virtual-image judgment is accurate, achieving the purpose of judging virtual image persons.
In one possible design, extracting the current face distance index value of the person from the face image includes:
extracting a current face attitude angle and a first face imaging size parameter value from the face image;
calculating to obtain a second face imaging size parameter value according to the current face attitude angle and the first face imaging size parameter value, wherein the second face imaging size parameter value is a face imaging size parameter value corresponding to the face of the person when the face is perpendicular to the optical axis of the image acquisition equipment under the condition that the face coordinate position is unchanged;
identifying the age of the person according to the face image;
and when the age is smaller than the preset age, correcting the second face imaging size parameter value according to the proportional relation between the child face size standard parameter value and the adult face size standard parameter value to obtain a third face imaging size parameter value, wherein the child face size standard parameter value corresponds to the age, and the third face imaging size parameter value is used as the current face distance degree index value.
Through this possible design, when the image acquisition equipment is a monocular camera, the near-far degree of the face relative to the origin of the camera coordinate system is reflected by the face imaging size under the frontal view after correction for age factors, which can substitute for the Z-axis coordinate when ranging is not possible; this ensures both that the current face near-far degree index value is extractable and that the subsequent virtual-image judgment is accurate, achieving the purpose of judging virtual image persons.
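The age correction in this design can be sketched as a simple rescaling. The standard size values, the preset age, and the direction of the correction (normalizing a child's smaller face to an adult-equivalent size) are assumptions for illustration, not values disclosed in the patent:

```python
def third_size_param(second_size, age, child_std, adult_std, preset_age=18):
    """Rescale a child's face imaging size by the adult/child standard
    ratio so it is comparable with index ranges calibrated on adult
    faces; adults pass through unchanged."""
    if age < preset_age:
        return second_size * (adult_std / child_std)
    return second_size
```

With an assumed child face width standard of 0.12 m versus an adult standard of 0.16 m, a 10-year-old's measured size of 90 would be corrected to 120, while a 30-year-old's stays 90.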
In one possible design, calculating a second face imaging size parameter value according to the current face pose angle and the first face imaging size parameter value includes:
leading the current face attitude angle into a trigonometric function for reflecting the rotation of the face onto a plane to obtain a rotation transformation coefficient, wherein the plane is vertical to the optical axis of the image acquisition equipment;
and calculating to obtain the second face imaging size parameter value according to the first face imaging size parameter value and the rotation transformation coefficient.
Through this possible design, a true and accurate frontal-view face size parameter can be obtained, ensuring the accuracy of the subsequent judgment.
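One simple form such a trigonometric correction can take (the patent does not disclose its exact function, so this cosine model is an assumption): a face rotated away from the camera is foreshortened by the cosine of the rotation angle, and dividing the measured size by that cosine recovers the size the face would image at if it were perpendicular to the optical axis.

```python
import math

def frontal_size(measured_w, measured_h, yaw_deg, pitch_deg):
    # Undo foreshortening: imaged width shrinks with yaw (head turn),
    # imaged height shrinks with pitch (head raise/lower). The cosines
    # are the rotation transformation coefficients.
    w = measured_w / math.cos(math.radians(yaw_deg))
    h = measured_h / math.cos(math.radians(pitch_deg))
    return w, h
```

For example, a face measured 80 px wide while turned 60 degrees corresponds to a frontal width of 160 px, since cos(60 degrees) = 0.5.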
In one possible design, extracting the current face distance index value of the person from the face image includes:
extracting a first face imaging size parameter value from the face image;
identifying the age of the person according to the face image;
and when the age is smaller than the preset age, correcting the first face imaging size parameter value according to the proportional relation between the child face size standard parameter value and the adult face size standard parameter value to obtain a fourth face imaging size parameter value, wherein the child face size standard parameter value corresponds to the age, and the fourth face imaging size parameter value is used as the current face distance degree index value.
Through this possible design, when the image acquisition equipment is a monocular camera, the near-far degree of the face relative to the origin of the camera coordinate system is reflected by the face imaging size after correction for age factors, which can substitute for the Z-axis coordinate when ranging is not possible; this ensures both that the current face near-far degree index value is extractable and that the subsequent virtual-image judgment is accurate, achieving the purpose of judging virtual image persons.
In one possible design, determining a current face near-far degree index range corresponding to the current face two-dimensional coordinate data according to a corresponding relationship between the face two-dimensional coordinate data and the face near-far degree index range, including:
and importing the current face two-dimensional coordinate data into a continuous curve fitting function, and calculating to obtain a current face far and near degree index range corresponding to the current face two-dimensional coordinate data, wherein the continuous curve fitting function is obtained by fitting according to multiple groups of actually measured face far and near degree index ranges and face two-dimensional coordinate data.
Through this possible design, after a limited number of groups of actually measured face near-far degree index ranges and face two-dimensional coordinate data are obtained, the curve-fitting technique refines them into current face near-far degree index ranges for arbitrary coordinate values, reducing the collection work while ensuring the accuracy of the subsequent judgment.
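A minimal sketch of such a fit, using one image coordinate and a quadratic for clarity (the sample values, the choice of polynomial, and the 1-D simplification are all illustrative assumptions; the patent's correspondence is over full two-dimensional coordinate data):

```python
import numpy as np

# Calibration samples: at each face x-coordinate, the smallest and largest
# near-far degree index a tester produced while moving through the space.
xs    = np.array([100.0, 200.0, 300.0, 400.0, 500.0])
lows  = np.array([ 40.0,  46.0,  50.0,  46.0,  40.0])
highs = np.array([ 90.0, 104.0, 110.0, 104.0,  90.0])

low_fit  = np.poly1d(np.polyfit(xs, lows, 2))   # continuous lower bound
high_fit = np.poly1d(np.polyfit(xs, highs, 2))  # continuous upper bound

def current_index_range(x):
    """Index range for a face x-coordinate not in the measured set."""
    return float(low_fit(x)), float(high_fit(x))
```

Once fitted, the continuous functions return a plausible range for coordinates between the measured sample points, so only a limited number of groups need to be collected.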
In a second aspect, the invention provides a device for distinguishing virtual image personnel, which comprises an image acquisition unit, a data extraction unit, a range determination unit and a virtual image judging unit that are sequentially communicatively connected;
the image acquisition unit is used for acquiring a face image, wherein the face image comprises at least one person;
the data extraction unit is used for extracting current face two-dimensional coordinate data and a current face distance index value of the person from the face image, wherein the current face distance index value is used for representing the current distance from the face of the person to image acquisition equipment, and the image acquisition equipment is used for acquiring the face image;
the range determining unit is used for determining a current face far and near degree index range corresponding to the current face two-dimensional coordinate data according to the corresponding relation between the face two-dimensional coordinate data and the face far and near degree index range, wherein the face far and near degree index range is a face far and near degree index value interval which corresponds to the face two-dimensional coordinate data and can move in a given space;
and the virtual image judging unit is used for judging that the personnel is the virtual image personnel when the index value of the far and near degree of the current face is positioned outside the index range of the far and near degree of the current face.
In one possible design, the data extraction unit includes a first size parameter extraction subunit;
and the first size parameter extraction subunit is configured to extract face two-dimensional coordinate data from the face image, where the face two-dimensional coordinate data is used as the current face two-dimensional coordinate data.
In one possible design, the data extraction unit includes a first size parameter extraction subunit and a second size parameter extraction subunit;
the first size parameter extraction subunit is configured to extract a current face pose angle and a first face imaging size parameter value from the face image;
and the second size parameter extraction subunit is in communication connection with the first size parameter extraction subunit, and is configured to calculate a second face imaging size parameter value according to the current face pose angle and the first face imaging size parameter value, where the second face imaging size parameter value is used as the current face near-far degree index value, and the second face imaging size parameter value is a face imaging size parameter value corresponding to a face of the person when the face of the person is perpendicular to the optical axis of the image acquisition device under a condition that a face coordinate position is unchanged.
In one possible design, the data extraction unit includes a first size parameter extraction subunit, a second size parameter extraction subunit, an age extraction subunit, and a third size parameter extraction subunit;
the first size parameter extraction subunit is configured to extract a current face pose angle and a first face imaging size parameter value from the face image;
the second size parameter extraction subunit is in communication connection with the first size parameter extraction subunit, and is configured to calculate a second face imaging size parameter value according to the current face pose angle and the first face imaging size parameter value, where the second face imaging size parameter value is a face imaging size parameter value corresponding to a face of the person when the face is perpendicular to an optical axis of the image acquisition device under a condition that a face coordinate position is unchanged;
the age extracting subunit is used for identifying the age of the person according to the face image;
and the third size parameter extraction subunit is respectively in communication connection with the second size parameter extraction subunit and the age extraction subunit, and is configured to, when the age is smaller than a preset age, correct the second face imaging size parameter value according to a proportional relationship between a child face size standard parameter value and an adult face size standard parameter value to obtain a third face imaging size parameter value, where the child face size standard parameter value corresponds to the age, and the third face imaging size parameter value is used as the current face distance degree index value.
In one possible design, the second size parameter extraction subunit comprises a coefficient acquisition sub-subunit and a size calculation sub-subunit which are communicatively connected;
the coefficient acquisition sub-subunit is used to import the current face pose angle into a trigonometric function reflecting the rotation of the face onto a plane to obtain a rotation transformation coefficient, wherein the plane is perpendicular to the optical axis of the image acquisition equipment;
and the size calculation sub-subunit is used to calculate the second face imaging size parameter value from the first face imaging size parameter value and the rotation transformation coefficient.
In one possible design, the data extraction unit includes a first size parameter extraction subunit, an age extraction subunit, and a fourth size parameter extraction subunit;
the first size parameter extraction subunit is configured to extract a first face imaging size parameter value from the face image;
the age extracting subunit is used for identifying the age of the person according to the face image;
and the fourth size parameter extraction subunit is respectively in communication connection with the first size parameter extraction subunit and the age extraction subunit, and is used for correcting the first face imaging size parameter value according to the proportional relation between the child face size standard parameter value and the adult face size standard parameter value when the age is smaller than the preset age to obtain a fourth face imaging size parameter value, wherein the child face size standard parameter value corresponds to the age, and the fourth face imaging size parameter value is used as the current face distance degree index value.
In a possible design, the range determining unit is specifically configured to import the current face two-dimensional coordinate data into a continuous curve fitting function, and calculate to obtain a current face near-far degree index range corresponding to the current face two-dimensional coordinate data, where the continuous curve fitting function is obtained by fitting according to multiple sets of actually measured face near-far degree index ranges and face two-dimensional coordinate data.
In a third aspect, the present invention provides a computer device, comprising a memory and a processor, wherein the memory is used for storing a computer program, and the processor is used for reading the computer program and executing the method for identifying a virtual image person as described in the first aspect or any one of the possible designs of the first aspect.
In a fourth aspect, the present invention provides a computer-readable storage medium having stored thereon instructions which, when run on a computer, cause the computer to perform the method for discriminating a virtual image person as described in the first aspect or any one of the possible designs of the first aspect.
In a fifth aspect, the present invention provides a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method of discriminating a virtual image person as described in the first aspect or any one of the possible designs of the first aspect.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a diagram illustrating a head posture in a case where a head is raised, shaken, and turned in the related art.
Fig. 2 is an exemplary diagram of a positional relationship between an image capturing device, a mirror, and a human face according to the present invention.
Fig. 3 is an exemplary diagram of imaging of an actual image face and a virtual image face provided by the present invention.
Fig. 4 is a schematic flow chart of a method for identifying a virtual image person provided by the present invention.
Fig. 5 is a schematic structural diagram of an apparatus for distinguishing a virtual image person according to the present invention.
Fig. 6 is a schematic structural diagram of a computer device provided by the present invention.
In the above drawings: 1-an image acquisition device; 2-a mirror; 31-real image human face; 32-virtual image human face; 4-imaging field of view; 5-grid.
Detailed Description
The invention is further described with reference to the following figures and specific embodiments. It should be noted that the description of the embodiments is provided to help understanding of the present invention, but the present invention is not limited thereto. Specific structural and functional details disclosed herein are merely illustrative of example embodiments of the invention. This invention may, however, be embodied in many alternate forms and should not be construed as limited to the embodiments set forth herein.
It will be understood that, although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of example embodiments of the present invention.
It should be understood that the term "and/or", as it may appear herein, merely describes an association between objects, meaning that three relationships may exist; e.g., A and/or B may mean: A exists alone, B exists alone, or A and B exist at the same time. The term "/and", as it may appear herein, describes another association, meaning that two relationships may exist; e.g., A/and B may mean: A exists alone, or A and B exist at the same time. In addition, the character "/", as it may appear herein, generally means that the former and latter associated objects are in an "or" relationship.
It will be understood that when an element is referred to herein as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may be present. Conversely, if an element is referred to herein as being "directly connected" or "directly coupled" to another element, no intervening elements are present. Other words used to describe the relationship between elements should be interpreted in a similar manner (e.g., "between" versus "directly between", "adjacent" versus "directly adjacent", etc.).
It is to be understood that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments of the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes" and/or "including," when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, numbers, steps, operations, elements, components, and/or groups thereof.
It should also be noted that, in some alternative designs, the functions/acts noted may occur out of the order noted in the figures. For example, two figures shown in succession may, in fact, be executed substantially concurrently, or the figures may sometimes be executed in the reverse order, depending upon the functionality/acts involved.
It should be understood that specific details are provided in the following description to facilitate a thorough understanding of example embodiments. However, it will be understood by those of ordinary skill in the art that the example embodiments may be practiced without these specific details. For example, systems may be shown in block diagrams in order not to obscure the examples in unnecessary detail. In other instances, well-known processes, structures and techniques may be shown without unnecessary detail in order to avoid obscuring example embodiments.
As shown in fig. 2 to 4, the method for identifying a virtual image person provided in the first aspect of the present embodiment may be, but is not limited to being, suitable for face detection and tracking in places with mirrors, such as shops, airports, exhibition halls, and elevators. As shown in fig. 2, in a square elevator space, the visual field of the image capturing device 1 (which may be, but is not limited to, a binocular camera or a monocular camera) includes the virtual image region of the mirror 2; virtual image persons appearing in that region need to be identified with the technical solution provided by the first aspect of the present embodiment. The method for identifying a virtual image person may include, but is not limited to, the following steps S101 to S104.
S101, obtaining a face image, wherein the face image comprises at least one person.
In the step S101, the face image may be, but is not limited to, acquired by the image acquisition apparatus 1 shown in fig. 2, and the face image may be acquired when the person appears in the imaging field of view 4 of the image acquisition apparatus 1.
And S102, extracting current face two-dimensional coordinate data and current face far and near degree index values of the person from the face image, wherein the current face far and near degree index values are used for representing the current far and near degree from the face of the person to image acquisition equipment, and the image acquisition equipment is used for acquiring the face image.
In step S102, when the image capturing device 1 is a binocular camera, the distance from the face to the origin O of the camera coordinate system (i.e. the Z-axis coordinate of the face in the camera coordinate system) can be obtained by the principle of binocular ranging (in this case the face image comprises the respective images captured by the two lenses), which reflects the current distance from the face of the person to the image capturing device 1. Since the imaging plane is perpendicular to the optical axis of the image capturing device 1 (i.e. the Z-axis of the camera coordinate system), the X-axis and Y-axis coordinates of the face in the camera coordinate system can be obtained directly from the coordinate position of the face in the face image. As shown in fig. 2, when the imaging plane of the image capturing device is divided into a plurality of grids 5, each grid 5 represents a specific coordinate position in the XY plane, and the grid position occupied by the face gives its X-axis and Y-axis coordinates. Therefore, the Z-axis coordinate of the face in the camera coordinate system can be used as the current face far and near degree index value, and the X-axis and Y-axis coordinates of the face in the camera coordinate system can be used as the current face two-dimensional coordinate data.
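A minimal sketch of step S102 for the binocular case follows. It uses the standard pinhole stereo model; the focal length, baseline, grid dimensions and sample coordinates are illustrative assumptions, not values from the patent.

```python
def depth_from_disparity(focal_length_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Binocular ranging: Z = f * B / d (pinhole stereo model)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

def grid_cell(u: float, v: float, image_w: int, image_h: int,
              cols: int = 5, rows: int = 4) -> tuple[int, int]:
    """Map a face's pixel position (u, v) to the grid cell that stands in
    for its X/Y position on the imaging plane (cf. grids 5 in fig. 2)."""
    col = min(int(u / image_w * cols), cols - 1)
    row = min(int(v / image_h * rows), rows - 1)
    return col, row

# Example: f = 800 px, baseline = 0.06 m, disparity = 24 px -> Z = 2.0 m
z = depth_from_disparity(800.0, 0.06, 24.0)
cell = grid_cell(640, 360, 1280, 720)   # face centred in a 1280x720 frame
```

Here `z` plays the role of the current face far and near degree index value and `cell` that of the current face two-dimensional coordinate data.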
S103, determining a current face far and near degree index range corresponding to the current face two-dimensional coordinate data according to the correspondence between face two-dimensional coordinate data and face far and near degree index ranges, wherein a face far and near degree index range refers to the interval of face far and near degree index values that a person moving in a given space can take at the corresponding face two-dimensional coordinate data.
In the step S103, the face far and near degree index value interval consists of a pair of upper and lower limit values of the face far and near degree index, and is used to confirm, for the corresponding face two-dimensional coordinate data, that the person is moving in the given space. The face two-dimensional coordinate data, the upper limit value and the lower limit value can be collected in advance, in the same manner as in step S102, while a test person moves through the given space. As shown in fig. 2, in the square elevator space, the XYZ coordinate system is the camera coordinate system; within the quadrangular pyramid whose vertex is the focus center of the image capturing device 1 and whose center line is the optical axis, the bottom surface of the pyramid is the imaging field of view 4. The XY plane can then be divided into 4 x 5 grids, and the pyramid can be further divided along the Z-axis dimension into a plurality of frustum-shaped spaces. Taking the XY coordinates of each frustum space as the face two-dimensional coordinate data, the corresponding upper and lower limit values of the Z-axis coordinate are collected for each set of XY coordinates. When a face occupies a certain frustum space, the corresponding upper and lower limit values of the Z-axis coordinate can be looked up from the current XY coordinates (namely the current face two-dimensional coordinate data) and used as the current face far and near degree index range. In addition, as shown in fig. 3, according to the principle of plane mirror imaging, the distance from the virtual image to the lens equals the total length of the actual optical path from the object to the mirror and then to the lens, so the object distance of the real image face 31 is smaller than that of the virtual image face 32, and from the perspective of the camera the Z-axis coordinate corresponding to the real image face 31 is inevitably smaller than that corresponding to the virtual image face 32; generally, when the Z-axis coordinate is greater than the corresponding upper limit value of the Z-axis coordinate, the person may be determined to be a virtual image person. Therefore, if the current face far and near degree index value lies outside the current face far and near degree index range corresponding to the current face two-dimensional coordinate data, the person is outside the given space and can be judged with certainty to be a virtual image person.
And S104, when the current face far and near degree index value is located outside the current face far and near degree index range, judging that the person is a virtual image person.
In step S104, when the current face distance index value is outside the current face distance index range, it indicates that the person does not move in a given space as a test person, and it may be determined that the person is a virtual image person. On the contrary, when the current face distance degree index value is within the current face distance degree index range, the person can be determined to be a real image person.
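Steps S103 and S104 can be sketched as a table lookup followed by an interval test. The lookup table below is hypothetical; in practice it would hold the bounds measured while the test person moved through the given space.

```python
# (col, row) grid cell -> (lower, upper) bound of the Z-axis coordinate
# observed while a test person moved through the given space (made-up values).
RANGE_TABLE = {
    (2, 2): (0.5, 2.6),   # centre cell: real faces lie 0.5 m - 2.6 m away
}

def is_virtual_person(cell: tuple[int, int], current_index: float) -> bool:
    """Step S104: a face whose far and near degree index value falls outside
    the pre-measured range for its cell cannot be inside the given space,
    so it must be a mirror (virtual) image."""
    lower, upper = RANGE_TABLE[cell]
    return not (lower <= current_index <= upper)

print(is_virtual_person((2, 2), 4.8))   # mirror image: too far away
print(is_virtual_person((2, 2), 1.7))   # real person inside the space
```

Per fig. 3, a mirror image always fails the test on the far side of the interval, because its optical path is longer than any path available inside the space.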
Therefore, through the discrimination scheme described in detail in the above steps S101 to S104, a new method for identifying in-mirror virtual image persons based on visible-light vision technology is provided: from a face image acquired on site, the current face two-dimensional coordinate data and the current face far and near degree index value of the on-site person are extracted, and the current face far and near degree index value is compared with the current face far and near degree index range corresponding to the current face two-dimensional coordinate data, determined according to the correspondence between face two-dimensional coordinate data and face far and near degree index ranges. If the current face far and near degree index value lies outside that range, the on-site person cannot be moving in the given space in which the test person collected the face two-dimensional coordinate data and face far and near degree index ranges, so whether the imaged person is a virtual image person can be judged with little computational resource. At the same time, no additional infrared camera needs to be deployed, which reduces hardware cost and facilitates practical application and popularization.
In this embodiment, on the basis of the technical solution of the first aspect, a first possible design is further specifically provided for extracting the current face far and near degree index value when the image capturing device is a monocular camera, that is, extracting the current face far and near degree index value of the person from the face image, where the first possible design includes, but is not limited to, the following step S211.
S211, extracting a first face imaging size parameter value from the face image, wherein the first face imaging size parameter value is used as the current face distance degree index value.
In step S211, the first face imaging size parameter value is the imaged size of the face in the face image, and can be measured directly from the captured image. Considering that a monocular camera cannot measure distance, the first face imaging size parameter value may replace the Z-axis coordinate of the first aspect as the current face far and near degree index value; that is, the far and near degree from the face to the origin O of the camera coordinate system is reflected by the imaged size of the face. Specifically, the first face imaging size parameter value may include, but is not limited to, an area value of a face image mark frame, an area value of any face key region, and/or a distance value between any two face key points, where the face key region and the face key points are detected from the face image (based on an existing face key point detection method). For example, the face key region may be, but is not limited to, a face region, an eye region, a nose region or a mouth region, and the distance value between two face key points may be, but is not limited to, an interpupillary distance value. As shown in fig. 3, according to the plane mirror imaging principle, the distance from the virtual image to the lens equals the total length of the actual optical path from the object to the mirror and then to the lens, so the object distance of the real image face 31 is smaller than that of the virtual image face 32, and from the perspective of the camera the imaged face size corresponding to the real image face 31 is inevitably larger than that corresponding to the virtual image face 32; generally, when the current face far and near degree index value is smaller than the corresponding face far and near degree index lower limit value, the person may be determined to be a virtual image person.
Therefore, through the possible design one described in step S211, when the image acquisition device is a monocular camera, the imaged size of the face can be used to reflect the far and near degree from the face to the origin of the camera coordinate system, so that the Z-axis coordinate can be replaced where distance cannot be measured, the extractability of the current face far and near degree index value is ensured, and the purpose of identifying the virtual image person is achieved.
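The interpupillary-distance variant of step S211 can be sketched in a few lines. The pupil key-point coordinates would come from any face landmark detector; the points below are made-up values for illustration.

```python
import math

def interpupillary_distance_px(left_pupil, right_pupil) -> float:
    """Euclidean distance between the two pupil key points, in pixels.
    Larger -> closer to the camera; a mirror image images smaller."""
    return math.dist(left_pupil, right_pupil)

d = interpupillary_distance_px((500.0, 300.0), (560.0, 300.0))  # 60.0 px
```

The value `d` then serves as the first face imaging size parameter value, i.e. the current face far and near degree index value of possible design one.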
In this embodiment, on the basis of the technical solution of the first aspect, a second possible design is further specifically proposed for extracting the current face far and near degree index value when the image capturing device is a monocular camera, that is, extracting the current face far and near degree index value of the person from the face image, where the second possible design includes, but is not limited to, the following steps S221 to S223.
S221, extracting a first face imaging size parameter value from the face image.
And S222, identifying the age of the person according to the face image.
And S223, when the age is smaller than the preset age, correcting the first face imaging size parameter value according to the proportional relation between the child face size standard parameter value and the adult face size standard parameter value to obtain a fourth face imaging size parameter value, wherein the child face size standard parameter value corresponds to the age, and the fourth face imaging size parameter value is used as the current face distance degree index value.
In the aforementioned step S221, the first face imaging size parameter value is as described in the aforementioned possible design one. In the step S222, the age of the person may be recognized by, but not limited to, importing the face image into a face recognition model that has undergone deep learning training, the model being an existing conventional model capable of recognizing age and the like from face images. In step S223, it is considered that adult face size parameter values such as the head size, face size or interpupillary distance do not differ greatly from person to person and can serve as a basic criterion; however, when one of the on-site person and the test person is an adult and the other is a child, an error would be introduced into the final virtual image person determination result. The first face imaging size parameter value therefore needs to be corrected according to the proportional relationship between the child face size standard parameter value and the adult face size standard parameter value (i.e. the value is reduced when the on-site person is an adult and the test person is a child, and enlarged when the on-site person is a child and the test person is an adult) to obtain the fourth face imaging size parameter value, which then replaces the first face imaging size parameter value of possible design one as the current face far and near degree index value. In this way, the current face far and near degree index value remains consistent with the face far and near degree index values of the test person, errors in subsequent virtual image person determination results are avoided, and determination accuracy is further improved.
Therefore, through the second possible design described in the above steps S221 to S223, when the image acquisition device is a monocular camera, the imaged size of the face corrected by the age factor can be used to reflect the far and near degree from the face to the origin of the camera coordinate system, so that the Z-axis coordinate can be replaced where distance cannot be measured, the extractability of the current face far and near degree index value and the accuracy of subsequent virtual image person determination results are ensured, and the purpose of determining the virtual image person is achieved. Furthermore, the first face imaging size parameter value may also be corrected based on a gender factor, that is, the gender of the person is identified from the face image, and during correction the child face size standard parameter value corresponds to both the age and the gender while the adult face size standard parameter value corresponds to the gender.
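The age correction of step S223 amounts to rescaling the measurement by the child/adult standard ratio. The standard values, preset age and sample measurement below are illustrative assumptions, not figures from the patent.

```python
# Per-age child interpupillary-distance standards and the adult standard,
# in millimetres (assumed values for illustration only).
CHILD_IPD_STANDARD_MM = {6: 50.0, 10: 55.0}
ADULT_IPD_STANDARD_MM = 63.0
PRESET_AGE = 16   # assumed threshold below which the correction applies

def corrected_size(first_size_px: float, age: int) -> float:
    """Scale a child's imaged size up to the adult baseline so it can be
    compared against ranges collected with adult test persons."""
    if age >= PRESET_AGE:
        return first_size_px
    ratio = CHILD_IPD_STANDARD_MM[age] / ADULT_IPD_STANDARD_MM
    # Dividing by the child/adult ratio enlarges the child's measurement,
    # matching the "on-site child, adult test person" case in the text.
    return first_size_px / ratio

fourth = corrected_size(40.0, 6)   # 40 px * 63/50 = 50.4 px
```

The result `fourth` is the fourth face imaging size parameter value used as the current face far and near degree index value.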
On the basis of the technical solution of the first aspect, the present embodiment further specifically proposes a third possible design for extracting the current face far and near degree index value when the image capturing device is a monocular camera, that is, extracting the current face far and near degree index value of the person from the face image, which includes, but is not limited to, the following steps S231 to S232.
And S231, extracting the current face attitude angle and the first face imaging size parameter value from the face image.
In step S231, a specific manner of extracting the current face pose angle of the person from the face image is an existing conventional manner, and may include, but is not limited to: (1) performing two-dimensional face key point detection on the face image; (2) matching the detected two-dimensional face key points with the corresponding face key points in a three-dimensional face model; (3) solving the rotation relation matrix between the two-dimensional face key points and the corresponding three-dimensional face key points; (4) solving, from the rotation relation matrix, the three Euler angles of the face relative to the camera coordinate system (namely the current face pose angle: the pitch angle, the yaw angle and the roll angle). Two-dimensional face key point detection locates the key regions of a face, including the eyebrows, eyes, nose, mouth, face contour and the like, given a face image. Existing face key point detection methods fall roughly into three classes: methods based on the Active Shape Model (ASM) and Active Appearance Model (AAM), Cascaded Pose Regression (CPR), and deep learning methods; a two-dimensional face mark frame, two-dimensional face key regions and/or two-dimensional face key points can be detected based on any one of these three existing classes of methods. The first face imaging size parameter value is as described in the previous possible design one. In addition, the current face pose angle can also be obtained by geometrically converting the three Euler angles of the face relative to the camera coordinate system into Euler angles relative to another spatial coordinate system.
And S232, calculating to obtain a second face imaging size parameter value according to the current face attitude angle and the first face imaging size parameter value, wherein the second face imaging size parameter value is used as the current face near-far degree index value, and the second face imaging size parameter value is a face imaging size parameter value corresponding to the face of the person when the face of the person is perpendicular to the optical axis of the image acquisition equipment under the condition that the face coordinate position is unchanged.
In the step S232, it is considered that the first face imaging size parameter value is obtained at an oblique viewing angle of the image acquisition device and would introduce an error into the final virtual image person determination result. A geometric spatial rotation transformation therefore needs to be performed according to the current face pose angle and the first face imaging size parameter value to obtain the second face imaging size parameter value at the front viewing angle, which then replaces the first face imaging size parameter value of possible design one as the current face far and near degree index value, so that errors in subsequent virtual image person determination results are avoided and determination accuracy is further improved.
Therefore, through the third possible design described in the steps S231 to S232, when the image acquisition device is a monocular camera, the imaged size of the face at the front viewing angle can be used to reflect the far and near degree from the face to the origin of the camera coordinate system, so that the Z-axis coordinate can be replaced where distance cannot be measured, the extractability of the current face far and near degree index value and the accuracy of subsequent virtual image person determination results are ensured, and the purpose of determining the virtual image person is achieved.
On the basis of the technical solution of the first aspect, the present embodiment further specifically proposes a fourth possible design for extracting the current face far and near degree index value when the image capturing device is a monocular camera, that is, extracting the current face far and near degree index value of the person from the face image, which includes, but is not limited to, the following steps S241 to S244.
And S241, extracting the current face attitude angle and the first face imaging size parameter value from the face image.
And S242, calculating to obtain a second face imaging size parameter value according to the current face attitude angle and the first face imaging size parameter value, wherein the second face imaging size parameter value is a face imaging size parameter value corresponding to the face of the person when the face of the person is perpendicular to the optical axis of the image acquisition equipment under the condition that the face coordinate position is unchanged.
And S243, identifying the age of the person according to the face image.
In the step S243, the age of the person may be recognized by, but not limited to, importing the face image into a face recognition model that has undergone deep learning training, the model being an existing conventional model capable of recognizing age and the like from face images.
And S244, when the age is smaller than a preset age, correcting the second face imaging size parameter value according to the proportional relation between the child face size standard parameter value and the adult face size standard parameter value to obtain a third face imaging size parameter value, wherein the child face size standard parameter value corresponds to the age, and the third face imaging size parameter value is used as the current face distance degree index value.
In step S244, it is considered that adult face size parameter values such as the head size, face size or interpupillary distance do not differ greatly from person to person and can serve as a basic criterion; however, when one of the on-site person and the test person is an adult and the other is a child, an error would be introduced into the final virtual image person determination result. The second face imaging size parameter value therefore needs to be corrected according to the proportional relationship between the child face size standard parameter value and the adult face size standard parameter value (i.e. the value is reduced when the on-site person is an adult and the test person is a child, and enlarged when the on-site person is a child and the test person is an adult) to obtain the third face imaging size parameter value, which then replaces the second face imaging size parameter value of possible design three as the current face far and near degree index value. In this way, the current face far and near degree index value of the person remains consistent with the face far and near degree index values of the test person, errors in subsequent virtual image person determination results are avoided, and determination accuracy is further improved.
Therefore, through the fourth possible design described in the above steps S241 to S244, when the image capturing device is a monocular camera, the imaged size of the face at the front viewing angle, corrected by the age factor, can be used to reflect the far and near degree from the face to the origin of the camera coordinate system, so that the Z-axis coordinate can be replaced where distance cannot be measured, the extractability of the current face far and near degree index value and the accuracy of subsequent virtual image person determination results are ensured, and the purpose of determining the virtual image person is achieved. Furthermore, the second face imaging size parameter value may also be corrected based on a gender factor (for example, the interpupillary distance of an adult male is 60 mm to 73 mm while that of an adult female is 55 mm to 68 mm, and imaged face size parameter values may likewise differ between genders), that is, the gender of the person is identified from the face image, and during correction the child face size standard parameter value corresponds to both the age and the gender while the adult face size standard parameter value corresponds to the gender.
On the basis of the technical solution of possible design three or four, the present embodiment further specifically proposes a fifth possible design for how to calculate the second face imaging size parameter value, that is, calculating the second face imaging size parameter value according to the current face pose angle and the first face imaging size parameter value, which includes, but is not limited to, the following steps S301 to S302.
And S301, importing the current face pose angle into a trigonometric function reflecting the rotation of the face onto a plane to obtain a rotation transformation coefficient, wherein the plane is perpendicular to the optical axis of the image acquisition equipment.
In step S301, rotating the face onto the plane means rotating the face until it squarely faces the lens of the image capturing device 1, and the trigonometric function can be derived through conventional geometric analysis. In the actual calculation process, considering that the change in face posture before and after rotation is small, the influence of the pitch angle and the roll angle on the face size parameter can be neglected, and the rotation transformation coefficient can then be calculated according to the following formula:
η = sec(θ_yaw)
where η represents the rotation transformation coefficient, θ_yaw represents the yaw angle in the current face pose angle, and sec(·) denotes the secant function.
S302, calculating to obtain a second face imaging size parameter value according to the first face imaging size parameter value and the rotation transformation coefficient.
In step S302, a result of multiplying the first face imaging size parameter value by the rotation transformation coefficient may be specifically used as the second face imaging size parameter value, for example, a pupil distance value is multiplied by the rotation transformation coefficient.
Therefore, through the fifth possible design described in the steps S301 to S302, a real and accurate front-view face size parameter can be obtained, ensuring the accuracy of subsequent discrimination.
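Steps S301 to S302 can be sketched directly from the formula above; the sample yaw angle and pixel size are illustrative values only.

```python
import math

def rotation_coefficient(yaw_rad: float) -> float:
    """Step S301: η = sec(θ_yaw); pitch and roll neglected per the text."""
    return 1.0 / math.cos(yaw_rad)

def front_view_size(first_size_px: float, yaw_rad: float) -> float:
    """Step S302: second face imaging size parameter value = first value × η."""
    return first_size_px * rotation_coefficient(yaw_rad)

# A face yawed 60° images at half its front-view width along that axis:
second = front_view_size(30.0, math.radians(60.0))   # 30 px * sec(60°) = 60 px
```

Multiplying, for example, the measured interpupillary distance by η recovers the size the face would image at were it perpendicular to the optical axis at the same position.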
On the basis of the first aspect and any one of possible designs one to five, the present embodiment further specifically provides a sixth possible design for how to accurately obtain the current face far and near degree index range, that is, determining the current face far and near degree index range corresponding to the current face two-dimensional coordinate data according to the correspondence between face two-dimensional coordinate data and face far and near degree index ranges, including: importing the current face two-dimensional coordinate data into a continuous curve fitting function and calculating the current face far and near degree index range corresponding to the current face two-dimensional coordinate data, wherein the continuous curve fitting function is obtained by fitting multiple sets of actually measured face far and near degree index ranges and face two-dimensional coordinate data.
Therefore, through the sixth possible design described above, after a limited set of actually measured face far and near degree index ranges and face two-dimensional coordinate data is obtained, the current face far and near degree index ranges corresponding to arbitrary variable values (namely changed values of the face two-dimensional coordinate data) can be derived through curve function fitting, which reduces the collection work while ensuring the accuracy of subsequent discrimination.
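A minimal sketch of possible design six follows. The patent does not specify the fitting model, so an ordinary least-squares line is used purely for illustration, and the measured sample data are invented.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b to measured pairs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Measured upper bounds of the Z-axis coordinate at a few X positions (metres):
xs = [0.0, 0.5, 1.0, 1.5]
upper = [2.6, 2.5, 2.4, 2.3]
a, b = fit_line(xs, upper)

def upper_bound(x: float) -> float:
    """Continuous fitted function: bound for any X, measured or not."""
    return a * x + b
```

The same fit would be repeated for the lower bound and for each coordinate dimension, giving a continuous range function in place of a finite lookup table.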
As shown in fig. 5, a second aspect of the present embodiment provides a virtual device for implementing the method for identifying a virtual image person in any one of the first aspect or the possible designs of the first aspect, including an image acquisition unit, a data extraction unit, a range determination unit, and a virtual image identification unit, which are sequentially connected in a communication manner;
the image acquisition unit is used for acquiring a face image, wherein the face image comprises at least one person;
the data extraction unit is used for extracting current face two-dimensional coordinate data and a current face distance index value of the person from the face image, wherein the current face distance index value is used for representing the current distance from the face of the person to image acquisition equipment, and the image acquisition equipment is used for acquiring the face image;
the range determining unit is used for determining a current face far and near degree index range corresponding to the current face two-dimensional coordinate data according to the corresponding relation between the face two-dimensional coordinate data and the face far and near degree index range, wherein the face far and near degree index range is a face far and near degree index value interval which corresponds to the face two-dimensional coordinate data and can move in a given space;
and the virtual image judging unit is used for judging that the personnel is the virtual image personnel when the index value of the far and near degree of the current face is positioned outside the index range of the far and near degree of the current face.
In one possible design, the data extraction unit includes a first size parameter extraction subunit;
and the first size parameter extraction subunit is configured to extract a first face imaging size parameter value from the face image, where the first face imaging size parameter value is used as the current face far and near degree index value.
In one possible design, the data extraction unit includes a first size parameter extraction subunit and a second size parameter extraction subunit;
the first size parameter extraction subunit is configured to extract a current face pose angle and a first face imaging size parameter value from the face image;
and the second size parameter extraction subunit is in communication connection with the first size parameter extraction subunit, and is configured to calculate a second face imaging size parameter value according to the current face pose angle and the first face imaging size parameter value, where the second face imaging size parameter value is used as the current face near-far degree index value, and the second face imaging size parameter value is a face imaging size parameter value corresponding to a face of the person when the face of the person is perpendicular to the optical axis of the image acquisition device under a condition that a face coordinate position is unchanged.
In one possible design, the data extraction unit includes a first size parameter extraction subunit, a second size parameter extraction subunit, an age extraction subunit, and a third size parameter extraction subunit;
the first size parameter extraction subunit is configured to extract a current face pose angle and a first face imaging size parameter value from the face image;
the second size parameter extraction subunit is in communication connection with the first size parameter extraction subunit, and is configured to calculate a second face imaging size parameter value according to the current face pose angle and the first face imaging size parameter value, where the second face imaging size parameter value is a face imaging size parameter value corresponding to a face of the person when the face is perpendicular to an optical axis of the image acquisition device under a condition that a face coordinate position is unchanged;
the age extracting subunit is used for identifying the age of the person according to the face image;
and the third size parameter extraction subunit is respectively in communication connection with the second size parameter extraction subunit and the age extraction subunit, and is configured to, when the age is smaller than a preset age, correct the second face imaging size parameter value according to a proportional relationship between a child face size standard parameter value and an adult face size standard parameter value to obtain a third face imaging size parameter value, where the child face size standard parameter value corresponds to the age, and the third face imaging size parameter value is used as the current face distance degree index value.
In one possible design, the second size parameter extraction subunit comprises a coefficient acquisition sub-subunit and a size calculation sub-subunit that are communicatively connected;
the coefficient acquisition sub-subunit is configured to import the current face pose angle into a trigonometric function that models the rotation of the face onto a plane perpendicular to the optical axis of the image acquisition device, to obtain a rotation transformation coefficient;
and the size calculation sub-subunit is configured to calculate the second face imaging size parameter value from the first face imaging size parameter value and the rotation transformation coefficient.
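As an illustrative sketch only — the patent does not disclose the exact trigonometric function — the rotation transformation coefficient might be taken as the product of the cosines of the yaw and pitch components of the face pose angle, with the frontal ("second") size recovered by dividing the observed ("first") size by that coefficient. The function names and the cosine model are assumptions:

```python
import math

def rotation_transform_coefficient(yaw_deg: float, pitch_deg: float) -> float:
    """Foreshortening factor when the face is rotated away from the plane
    perpendicular to the camera's optical axis (assumed cosine model)."""
    return math.cos(math.radians(yaw_deg)) * math.cos(math.radians(pitch_deg))

def second_size_parameter(first_size: float, yaw_deg: float, pitch_deg: float) -> float:
    """Size the face would present if it were perpendicular to the optical
    axis, at the same image coordinate position."""
    return first_size / rotation_transform_coefficient(yaw_deg, pitch_deg)
```

Under this model a face yawed 60 degrees that measures 50 px wide would be normalized to about 100 px, since the cosine of 60 degrees is 0.5.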
In one possible design, the data extraction unit includes a first size parameter extraction subunit, an age extraction subunit, and a fourth size parameter extraction subunit;
the first size parameter extraction subunit is configured to extract a first face imaging size parameter value from the face image;
the age extraction subunit is configured to identify the age of the person from the face image;
and the fourth size parameter extraction subunit is communicatively connected to the first size parameter extraction subunit and the age extraction subunit, respectively, and is configured to, when the age is smaller than the preset age, correct the first face imaging size parameter value according to the proportional relationship between the child face size standard parameter value corresponding to that age and the adult face size standard parameter value, to obtain a fourth face imaging size parameter value, which serves as the current face distance index value.
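A minimal sketch of the age correction described above. The direction of the scaling (multiplying by the adult-to-child ratio so that a child's smaller face becomes comparable with adult-calibrated index ranges) and the lookup-table form of the child standard values are assumptions, not disclosed specifics:

```python
def correct_size_for_age(size_value: float, age: int,
                         child_standard_by_age: dict,
                         adult_standard: float,
                         preset_age: int = 18) -> float:
    """Scale a measured face imaging size by the adult/child face-size ratio
    when the person is younger than the preset age; otherwise return it as-is."""
    if age >= preset_age:
        return size_value
    child_standard = child_standard_by_age[age]  # standard value for this age
    return size_value * (adult_standard / child_standard)
```

For example, with a hypothetical child standard of 100 at age 10 and an adult standard of 125, a measured size of 80 would be corrected to 100.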
In one possible design, the range determining unit is specifically configured to import the current face two-dimensional coordinate data into a continuous curve fitting function and calculate the current face distance index range corresponding to the current face two-dimensional coordinate data, where the continuous curve fitting function is obtained by fitting multiple sets of actually measured face distance index ranges and the corresponding face two-dimensional coordinate data.
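One way to realize such a continuous curve fitting function is to fit a separate polynomial to the measured lower and upper bounds of the index range as a function of the face's image position. The calibration numbers and the quadratic degree below are invented for illustration:

```python
import numpy as np

# Hypothetical calibration data: face x-coordinate in the image vs. the
# smallest/largest face-size index measured while a real person moved
# through the monitored space at that position.
xs = np.array([100.0, 300.0, 500.0, 700.0, 900.0])
lower = np.array([40.0, 55.0, 60.0, 55.0, 40.0])
upper = np.array([120.0, 150.0, 160.0, 150.0, 120.0])

# One continuous polynomial per bound (degree 2 chosen arbitrarily).
lower_fit = np.polynomial.Polynomial.fit(xs, lower, deg=2)
upper_fit = np.polynomial.Polynomial.fit(xs, upper, deg=2)

def face_distance_index_range(x: float) -> tuple:
    """Plausible face-size index interval for a real face at image position x."""
    return float(lower_fit(x)), float(upper_fit(x))
```

A finer-grained model could fit over both image coordinates rather than only the horizontal position; the patent leaves the fitting procedure open.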
For the working process, working details, and technical effects of the foregoing device provided in the second aspect of this embodiment, reference may be made to the method for identifying a virtual image person in the first aspect or any possible design of the first aspect, and details are not repeated here.
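Putting the pieces together, the decision rule the device implements — flag a detected face as a virtual image person (e.g. a face on a poster or screen) when its distance index falls outside the plausible range for its image position — reduces to a simple interval test. This sketch assumes the index value and range have already been computed as described above:

```python
def is_virtual_image_person(current_index: float, current_range: tuple) -> bool:
    """True when the face's distance index lies outside the interval a real
    person could produce at that image position."""
    low, high = current_range
    return current_index < low or current_index > high
```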
As shown in fig. 6, a third aspect of the present embodiment provides a computer device for executing the method for identifying a virtual image person in the first aspect or any possible design of the first aspect. The computer device includes a memory and a processor, where the memory is configured to store a computer program and the processor is configured to read the computer program and execute the method. For example, the memory may include, but is not limited to, a random-access memory (RAM), a read-only memory (ROM), a flash memory, a first-in first-out (FIFO) memory, and/or a first-in last-out (FILO) memory; the processor may be, for example but not limited to, a microprocessor of the STM32F105 family. In addition, the computer device may also include, but is not limited to, a power module, a display screen, and other necessary components.
For the working process, working details, and technical effects of the foregoing computer device provided in the third aspect of this embodiment, reference may be made to the method for identifying a virtual image person in the first aspect or any possible design of the first aspect, and details are not repeated here.
A fourth aspect of the present embodiment provides a computer-readable storage medium storing instructions that, when run on a computer, perform the method for identifying a virtual image person according to the first aspect or any possible design of the first aspect. The computer-readable storage medium is a carrier for storing data and may include, but is not limited to, floppy disks, optical disks, hard disks, flash memories, flash drives, and/or memory sticks; the computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device.
For the working process, working details, and technical effects of the foregoing computer-readable storage medium provided in the fourth aspect of this embodiment, reference may be made to the method for identifying a virtual image person in the first aspect or any possible design of the first aspect, and details are not repeated here.
A fifth aspect of the present embodiment provides a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method for identifying a virtual image person according to the first aspect or any possible design of the first aspect. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device.
The embodiments described above are merely illustrative. Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment. A person of ordinary skill in the art can understand and implement the embodiments without inventive effort.
The above examples are intended only to illustrate the technical solution of the present invention, not to limit it. Although the present invention has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art will understand that modifications may still be made to the embodiments, or equivalents substituted for some of their features, without such modifications or substitutions departing from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
Finally, it should be noted that the present invention is not limited to the above alternative embodiments, and various other forms of products can be derived from it by anyone in light of this disclosure. The above detailed description should not be construed as limiting the scope of protection of the present invention, which is defined by the claims; the description may be used to interpret the claims.

Claims (10)

1. A method for identifying a person in a virtual image, comprising:
acquiring a face image, wherein the face image comprises at least one person;
extracting current face two-dimensional coordinate data and a current face distance index value of the person from the face image, wherein the current face distance index value is used for representing the current distance from the face of the person to an image acquisition device, and the image acquisition device is used for acquiring the face image;
determining a current face distance index range corresponding to the current face two-dimensional coordinate data according to a corresponding relationship between face two-dimensional coordinate data and face distance index ranges, wherein the face distance index range is an interval of face distance index values that corresponds to the face two-dimensional coordinate data for a person able to move within a given space;
and when the current face distance index value falls outside the current face distance index range, determining that the person is a virtual image person.
2. The method of claim 1, wherein extracting the current face distance index value of the person from the face image comprises:
and extracting a first face imaging size parameter value from the face image, wherein the first face imaging size parameter value serves as the current face distance index value.
3. The method of claim 1, wherein extracting the current face distance index value of the person from the face image comprises:
extracting a current face pose angle and a first face imaging size parameter value from the face image;
and calculating a second face imaging size parameter value according to the current face pose angle and the first face imaging size parameter value, wherein the second face imaging size parameter value serves as the current face distance index value, and the second face imaging size parameter value is the face imaging size parameter value that the face of the person would have if it were perpendicular to the optical axis of the image acquisition device, with the face coordinate position unchanged.
4. The method of claim 1, wherein extracting the current face distance index value of the person from the face image comprises:
extracting a current face pose angle and a first face imaging size parameter value from the face image;
calculating a second face imaging size parameter value according to the current face pose angle and the first face imaging size parameter value, wherein the second face imaging size parameter value is the face imaging size parameter value that the face of the person would have if it were perpendicular to the optical axis of the image acquisition device, with the face coordinate position unchanged;
identifying the age of the person according to the face image;
and when the age is smaller than the preset age, correcting the second face imaging size parameter value according to the proportional relationship between a child face size standard parameter value corresponding to the age and an adult face size standard parameter value, to obtain a third face imaging size parameter value, wherein the third face imaging size parameter value serves as the current face distance index value.
5. The method of claim 3 or 4, wherein calculating a second face imaging size parameter value based on the current face pose angle and the first face imaging size parameter value comprises:
importing the current face pose angle into a trigonometric function that models the rotation of the face onto a plane perpendicular to the optical axis of the image acquisition device, to obtain a rotation transformation coefficient;
and calculating to obtain the second face imaging size parameter value according to the first face imaging size parameter value and the rotation transformation coefficient.
6. The method of claim 1, wherein extracting the current face distance index value of the person from the face image comprises:
extracting a first face imaging size parameter value from the face image;
identifying the age of the person according to the face image;
and when the age is smaller than the preset age, correcting the first face imaging size parameter value according to the proportional relationship between a child face size standard parameter value corresponding to the age and an adult face size standard parameter value, to obtain a fourth face imaging size parameter value, wherein the fourth face imaging size parameter value serves as the current face distance index value.
7. The method of claim 1, wherein determining a current face distance index range corresponding to the current face two-dimensional coordinate data according to a corresponding relationship between face two-dimensional coordinate data and face distance index ranges comprises:
importing the current face two-dimensional coordinate data into a continuous curve fitting function and calculating the current face distance index range corresponding to the current face two-dimensional coordinate data, wherein the continuous curve fitting function is obtained by fitting multiple sets of actually measured face distance index ranges and the corresponding face two-dimensional coordinate data.
8. A device for identifying a virtual image person, comprising an image acquisition unit, a data extraction unit, a range determining unit, and a virtual image determining unit that are communicatively connected in sequence;
the image acquisition unit is used for acquiring a face image, wherein the face image comprises at least one person;
the data extraction unit is used for extracting current face two-dimensional coordinate data and a current face distance index value of the person from the face image, wherein the current face distance index value is used for representing the current distance from the face of the person to image acquisition equipment, and the image acquisition equipment is used for acquiring the face image;
the range determining unit is configured to determine a current face distance index range corresponding to the current face two-dimensional coordinate data according to a corresponding relationship between face two-dimensional coordinate data and face distance index ranges, wherein the face distance index range is an interval of face distance index values that corresponds to the face two-dimensional coordinate data for a person able to move within a given space;
and the virtual image determining unit is configured to determine that the person is a virtual image person when the current face distance index value falls outside the current face distance index range.
9. A computer device comprising a memory and a processor, wherein the memory is used for storing a computer program, and the processor is used for reading the computer program and executing the method for identifying a virtual image person according to any one of claims 1 to 7.
10. A computer-readable storage medium having stored thereon instructions for performing the method of identifying a virtual image person of any one of claims 1 to 7 when the instructions are run on a computer.
CN202010762179.XA 2020-07-31 2020-07-31 Method and device for distinguishing virtual image personnel and computer equipment Active CN111898553B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010762179.XA CN111898553B (en) 2020-07-31 2020-07-31 Method and device for distinguishing virtual image personnel and computer equipment


Publications (2)

Publication Number Publication Date
CN111898553A true CN111898553A (en) 2020-11-06
CN111898553B CN111898553B (en) 2022-08-09

Family

ID=73183072

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010762179.XA Active CN111898553B (en) 2020-07-31 2020-07-31 Method and device for distinguishing virtual image personnel and computer equipment

Country Status (1)

Country Link
CN (1) CN111898553B (en)


Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS61147308A (en) * 1984-12-20 1986-07-05 Matsushita Electric Ind Co Ltd Teaching device
CN103530599A (en) * 2013-04-17 2014-01-22 Tcl集团股份有限公司 Method and system for distinguishing real face and picture face
CN105023010A (en) * 2015-08-17 2015-11-04 中国科学院半导体研究所 Face living body detection method and system
CN105243386A (en) * 2014-07-10 2016-01-13 汉王科技股份有限公司 Face living judgment method and system
CN105574518A (en) * 2016-01-25 2016-05-11 北京天诚盛业科技有限公司 Method and device for human face living detection
CN105740780A (en) * 2016-01-25 2016-07-06 北京天诚盛业科技有限公司 Method and device for human face in-vivo detection
CN105740781A (en) * 2016-01-25 2016-07-06 北京天诚盛业科技有限公司 Three-dimensional human face in-vivo detection method and device
CN106355147A (en) * 2016-08-26 2017-01-25 张艳 Acquiring method and detecting method of live face head pose detection regression apparatus
CN108416291A (en) * 2018-03-06 2018-08-17 广州逗号智能零售有限公司 Face datection recognition methods, device and system
CN108985220A (en) * 2018-07-11 2018-12-11 腾讯科技(深圳)有限公司 A kind of face image processing process, device and storage medium
CN109413359A (en) * 2017-08-16 2019-03-01 华为技术有限公司 Camera tracking method, device and equipment
CN110199296A (en) * 2019-04-25 2019-09-03 深圳市汇顶科技股份有限公司 Face identification method, processing chip and electronic equipment
US20190327124A1 (en) * 2012-12-05 2019-10-24 Origin Wireless, Inc. Method, apparatus, and system for object tracking and sensing using broadcasting
CN110532933A (en) * 2019-08-26 2019-12-03 淮北师范大学 A kind of living body faces detection head pose returns the acquisition methods and detection method of device
CN110598571A (en) * 2019-08-15 2019-12-20 中国平安人寿保险股份有限公司 Living body detection method, living body detection device and computer-readable storage medium
CN110956065A (en) * 2019-05-11 2020-04-03 初速度(苏州)科技有限公司 Face image processing method and device for model training
CN111160178A (en) * 2019-12-19 2020-05-15 深圳市商汤科技有限公司 Image processing method and device, processor, electronic device and storage medium
CN111209870A (en) * 2020-01-09 2020-05-29 杭州涂鸦信息技术有限公司 Binocular living body camera rapid registration method, system and device thereof


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
GUO-SHING HUANG et al.: "Recognizing and Locating of Objects Using Binocular Vision System", IEEE *
ZHANG LEI et al.: "Research on the Technology of Extracting 3D Face Feature Points on Basis of Binocular Vision", IEEE *
YANG Bo et al.: "Improved synthesis method for weak-texture face images based on a deformable model", Computer Simulation *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112016482A (en) * 2020-08-31 2020-12-01 成都新潮传媒集团有限公司 Method and device for distinguishing false face and computer equipment
CN112418036A (en) * 2020-11-12 2021-02-26 广州市保伦电子有限公司 Video conference order analysis method and processing terminal
CN112990068A (en) * 2021-03-31 2021-06-18 辽宁华盾安全技术有限责任公司 Elevator passenger counting method and system based on deep learning
CN113115086A (en) * 2021-04-16 2021-07-13 安乐 Method for collecting elevator media viewing information based on video sight line identification
CN113115086B (en) * 2021-04-16 2023-09-19 浙江闪链科技有限公司 Method for collecting elevator media viewing information based on video line-of-sight identification

Also Published As

Publication number Publication date
CN111898553B (en) 2022-08-09

Similar Documents

Publication Publication Date Title
CN111898553B (en) Method and device for distinguishing virtual image personnel and computer equipment
CN108875524B (en) Sight estimation method, device, system and storage medium
CN102657532B (en) Height measuring method and device based on body posture identification
CN102520796B (en) Sight tracking method based on stepwise regression analysis mapping model
CN109690553A (en) The system and method for executing eye gaze tracking
CN106796449A (en) Eye-controlling focus method and device
WO2022121283A1 (en) Vehicle key point information detection and vehicle control
JP5163982B2 (en) Gaze measurement device, gaze measurement program, gaze measurement method, and display for gaze measurement device
CN111028271B (en) Multi-camera personnel three-dimensional positioning and tracking system based on human skeleton detection
CN104978548A (en) Visual line estimation method and visual line estimation device based on three-dimensional active shape model
KR20240074755A (en) Eye direction tracking method and device
US20210124917A1 (en) Method for automatically generating hand marking data and calculating bone length
CN113842172B (en) Pharyngeal rear wall visual touch recognition device based on template matching and arithmetic averaging
US11386578B2 (en) Image labeling system of a hand in an image
JP2008204200A (en) Face analysis system and program
CN111898552B (en) Method and device for distinguishing person attention target object and computer equipment
CN111339982A (en) Multi-stage pupil center positioning technology implementation method based on features
CN108537103B (en) Living body face detection method and device based on pupil axis measurement
Rocca et al. Head pose estimation by perspective-n-point solution based on 2d markerless face tracking
CN116051631A (en) Light spot labeling method and system
CN109815823A (en) Data processing method and Related product
CN110007764B (en) Gesture skeleton recognition method, device and system and storage medium
CN110647790A (en) Method and device for determining gazing information
CN116758006B (en) Scaffold quality detection method and device
CN111080712B (en) Multi-camera personnel positioning, tracking and displaying method based on human body skeleton detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant