CN115482564A - Face recognition method, device, equipment and computer storage medium - Google Patents

Face recognition method, device, equipment and computer storage medium

Info

Publication number
CN115482564A
Authority
CN
China
Prior art keywords
preset
face recognition
posture
camera
target
Prior art date
Legal status
Pending
Application number
CN202110604756.7A
Other languages
Chinese (zh)
Inventor
杨军
饶天珉
Current Assignee
BOE Technology Group Co Ltd
Original Assignee
BOE Technology Group Co Ltd
Priority date
Filing date
Publication date
Application filed by BOE Technology Group Co Ltd
Priority to CN202110604756.7A
Publication of CN115482564A

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a face recognition method, apparatus, device and computer storage medium, and belongs to the technical field of computers. The method comprises the following steps: shooting, through a camera, a first image of a preset sample in a target posture; obtaining, according to the first posture of the preset sample in the first image and the target posture, a target face recognition algorithm matched with the current posture of the camera; and then performing face recognition according to the target face recognition algorithm. In this way, the face recognition algorithms of the camera in different postures can be determined, so that the camera can perform face recognition in multiple postures. This solves the problem in the related art that the camera needs to be placed in a preset posture, which makes face recognition difficult, and achieves the effect of reducing the difficulty of face recognition.

Description

Face recognition method, device, equipment and computer storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a face recognition method, an apparatus, a device, and a computer storage medium.
Background
An information pushing device is a multimedia device used in crowded public places, such as shopping malls and supermarkets, to publish commercial, financial, entertainment and other information through a display screen. At present, an information pushing device can, through face recognition technology, enable an advertiser to push corresponding advertisement information according to the different audiences in front of the display screen.
A face recognition method comprises the steps of firstly determining a preset installation posture of a camera and a face recognition algorithm corresponding to the preset installation posture, then installing the camera according to the preset installation posture, and carrying out face recognition according to the corresponding face recognition algorithm.
In this method, the camera needs to be placed in the preset posture in order to match the corresponding face recognition algorithm, and therefore face recognition is difficult.
Disclosure of Invention
The embodiments of the application provide a face recognition method, a face recognition device, face recognition equipment and a computer storage medium. The technical solutions are as follows:
according to an aspect of the present application, there is provided a face recognition method, including:
shooting a first image of a preset sample of the target posture through a camera;
determining a first pose of the preset sample in the first image;
determining the current posture of the camera according to the first posture and the target posture;
determining a target face recognition algorithm matched with the current posture of the camera;
and carrying out face recognition through the target face recognition algorithm.
Optionally, the determining a current posture of the camera according to the first posture and the target posture includes:
comparing the first posture with at least two preset postures, wherein the at least two preset postures correspond to at least two face recognition algorithms one to one, and the at least two preset postures are postures of the preset sample in images of the preset sample of the target posture shot by the camera in at least two different postures;
when a target preset posture matched with the first posture exists in the at least two preset postures, determining the target preset posture as the current posture of the camera;
the determining a target face recognition algorithm matched with the current posture of the camera comprises:
and determining the algorithm corresponding to the target preset posture as a target face recognition algorithm matched with the current posture of the camera.
Optionally, before comparing the first posture with at least two preset postures, the method further includes:
shooting preset samples of the target postures in at least two postures through the camera to obtain at least two sample images;
determining the at least two preset postures of the preset sample in the at least two sample images;
and the at least two preset postures correspond to the at least two face recognition algorithms one by one.
Optionally, the determining the first pose of the preset sample in the first image includes:
performing image recognition on the first image to obtain coordinates of four vertexes of the preset sample in the first image;
and determining the first posture of the preset sample according to the coordinates of the four vertexes.
Optionally, determining the first pose of the preset sample according to the coordinates of the four vertices includes:
determining the length direction of the preset sample according to the coordinates of the four vertexes;
determining the first pose of the preset sample based on the length direction.
Optionally, the at least two preset postures include a horizontal posture and a vertical posture of the preset sample.
Optionally, the determining a first pose of the preset sample in the first image includes:
performing image recognition on the first image to obtain an image of the preset sample in the first image;
and detecting the posture of the image of the preset sample in the first image to determine the first posture.
Optionally, the camera is installed on a screen, and after the face recognition is performed through the target face recognition algorithm, the method further includes:
when face information is detected, acquiring recommendation information according to the face information;
and displaying the recommendation information through the screen.
Optionally, the face information includes one or more of age, gender, expression, and duration of time that a face is detected.
Optionally, the capturing, by a camera, a first image of a preset sample of the target pose includes:
after the target instruction is received, a first image of a preset sample of the target posture is shot through the camera.
According to another aspect of the present application, there is provided a face recognition apparatus including:
the image acquisition module is used for shooting a first image of a preset sample of the target posture through a camera;
a first pose determination module, configured to determine a first pose of the preset sample in the first image;
the second attitude determination module is used for determining the current attitude of the camera according to the first attitude and the target attitude;
the first algorithm determining module is used for determining a target face recognition algorithm matched with the current posture of the camera;
and the face recognition module is used for carrying out face recognition through the target face recognition algorithm.
Optionally, the face recognition apparatus further includes:
the posture comparison module is used for comparing the first posture with at least two preset postures, wherein the at least two preset postures correspond to at least two face recognition algorithms one to one, and the at least two preset postures are postures of the preset sample in images of the preset sample of the target posture shot by the camera in at least two different postures;
the third posture determination module is used for determining, when a target preset posture matched with the first posture exists in the at least two preset postures, the target preset posture as the current posture of the camera;
and the second algorithm determining module is used for determining the algorithm corresponding to the target preset posture as the target face recognition algorithm matched with the current posture of the camera.
According to another aspect of the present application, there is provided a face recognition apparatus, including a processor, a camera, and a memory;
the camera and the memory are both connected to the processor, at least one instruction, at least one program, a code set, or an instruction set is stored in the memory, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by the processor to implement the face recognition method according to any one of the above aspects.
Optionally, the device further comprises a screen.
According to another aspect of the present application, there is provided a computer storage medium having at least one instruction, at least one program, code set, or set of instructions stored therein, which is loaded and executed by a processor to implement a face recognition method according to any one of the above aspects.
The beneficial effects brought by the technical solutions provided in the embodiments of the present application include at least the following:
the method comprises the steps of shooting a first image of a preset sample of a target gesture through a camera, obtaining a target face recognition algorithm matched with the current gesture of the camera according to the first gesture and the target gesture of the preset sample in the first image, and then carrying out face recognition according to the target face recognition algorithm, so that the face recognition algorithms of the camera in different gestures can be determined, the camera can carry out face recognition under various gestures, the problem that the difficulty of face recognition is high due to the fact that the camera needs to be placed according to the preset gesture in the related technology can be solved, and the effect of reducing the difficulty of face recognition is achieved.
Drawings
In order to describe the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic diagram of an implementation environment of a face recognition method provided in an embodiment of the present application;
fig. 2 is a flowchart of a face recognition method according to an embodiment of the present application;
fig. 3 is a flowchart of another face recognition method provided in the embodiment of the present application;
FIG. 4 is a schematic diagram of the camera shooting a preset sample of the target posture in one posture in the method shown in FIG. 3;
FIG. 5 is a schematic diagram of the camera shooting a preset sample of the target posture in another posture in the method shown in FIG. 3;
FIG. 6 is a schematic diagram of two preset poses in two sample images in the method shown in FIG. 3;
FIG. 7 is a schematic diagram of the camera acquiring the first sample image in the first imaging posture in the method shown in FIG. 3;
FIG. 8 is a schematic diagram of the camera acquiring the second sample image in the second imaging posture in the method shown in FIG. 3;
FIG. 9 is a flowchart of one way of determining the first posture of the preset sample in the first image in the method shown in FIG. 3;
FIG. 10 is a schematic diagram of the coordinates of the four vertices of the preset sample in the first image in the method shown in FIG. 3;
FIG. 11 is a flowchart of another way of determining the first posture of the preset sample in the first image in the method shown in FIG. 3;
fig. 12 is a block diagram of a face recognition apparatus according to an embodiment of the present application;
fig. 13 is a block diagram of another face recognition apparatus according to an embodiment of the present application;
fig. 14 is a block diagram of a structure of a face recognition device according to an embodiment of the present application.
The above figures show specific embodiments of the present application, which are described in more detail below. The drawings and written description are not intended to limit the scope of the inventive concepts in any manner, but rather to illustrate the concepts of the application to those skilled in the art with reference to specific embodiments.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, the following detailed description of the embodiments of the present application will be made with reference to the accompanying drawings.
Face recognition may refer to a process of acquiring an image and obtaining face information in the image through a face recognition algorithm. Face images in different postures can correspond to different face recognition algorithms; if the posture of a face in an image does not match the face recognition algorithm, the recognition rate of the face recognition algorithm drops sharply.
At present, different placement positions of a camera may result in different shooting postures of the camera, and further in different postures of the human faces in the acquired images. For example, whether the camera shoots a standing pedestrian in a horizontal shooting posture or in a vertical shooting posture, the acquired image is a rectangular landscape image, but the postures of the faces in the images acquired in the different shooting postures are different, and faces in different postures need different face recognition algorithms. However, the image transmitted from the camera to the processor does not include information about the posture of the camera, so it is difficult for the processor to determine the current posture of the camera from the image, and a corresponding face recognition algorithm cannot be obtained. Therefore, in current face recognition methods, the camera needs to be placed in a preset posture so as to fit a fixed face recognition algorithm, and face recognition is therefore difficult.
The embodiment of the application provides a face recognition method, which can solve the problems in the related technology.
Fig. 1 is a schematic diagram of an implementation environment of a face recognition method according to an embodiment of the present application, where the implementation environment may include a terminal 11, a screen 12, and a processor.
The terminal 11 may include a camera.
The screen 12 may be digital signage, which is an information pushing device that can distribute public information, advertisement information, entertainment information and the like in crowded public places by means of image display (or sound playing). It can play advertisement information for specific crowds at specific places and in specific time periods. The processor may be located in the camera or in the screen, or it may be located outside the screen and serve as a common processor for multiple screens within a site.
The terminal 11 and the screen 12 may be connected to the processor in a wired or wireless manner, and the terminal 11 may be mounted on the screen 12 and connected to the screen 12 in a wired manner.
The application scenario of the embodiment of the application may include:
in markets, airports, stations, subway stations and other public places where people flow and gather, face recognition is carried out through a camera on a digital sign, the digital sign can comprise governments, enterprise building digital signs, medical industry digital signs, hotel multimedia digital signs and elevator multimedia digital signs, and the camera is installed on the digital sign, so that information of people passing through the digital sign can be automatically detected according to a target face recognition algorithm matched with the current posture of the camera, and an advertiser or a manager can issue advertisements, government bulletins and other information according to different audiences in front of the digital sign.
Fig. 2 is a flowchart of a face recognition method according to an embodiment of the present application. The method may be applied to a processor in the implementation environment shown in fig. 1. The method comprises the following steps:
step 201, shooting a first image of a preset sample of the target gesture through a camera.
The preset sample can be placed in the target posture by the staff member who installs the camera.
Step 202, determining a first posture of a preset sample in the first image.
Step 203, determining the current posture of the camera according to the first posture and the target posture.
Step 204, determining a target face recognition algorithm matched with the current posture of the camera.
Step 205, carrying out face recognition through the target face recognition algorithm.
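For illustration only, steps 201 to 205 above can be wired together as the following sketch. This is not the claimed implementation: the capture callable, the pose detection and pose inference callables, and the algorithm registry are all hypothetical placeholders supplied by the caller.

```python
from typing import Any, Callable, Dict, List

def calibrate_and_recognize(
    capture: Callable[[], Any],                         # returns one image frame from the camera
    detect_sample_pose: Callable[[Any], str],           # step 202: pose of the preset sample in the image
    infer_camera_pose: Callable[[str, str], str],       # step 203: camera pose from first pose + target pose
    pose_to_algorithm: Dict[str, Callable[[Any], List[dict]]],  # step 204: one algorithm per camera pose
    target_pose: str,
) -> List[dict]:
    first_image = capture()                             # step 201: shoot the preset sample of the target pose
    first_pose = detect_sample_pose(first_image)        # step 202
    camera_pose = infer_camera_pose(first_pose, target_pose)   # step 203
    algorithm = pose_to_algorithm[camera_pose]          # step 204: target face recognition algorithm
    return algorithm(capture())                         # step 205: face recognition with the matched algorithm
```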
To sum up, the embodiment of the present application provides a face recognition method. A first image of a preset sample in a target posture is shot through a camera, and a target face recognition algorithm matched with the current posture of the camera is obtained according to the first posture of the preset sample in the first image and the target posture; face recognition is then performed according to the target face recognition algorithm. In this way, the face recognition algorithms of the camera in different postures can be determined, so that the camera can perform face recognition in multiple postures. This solves the problem in the related art that the camera needs to be placed in a preset posture, which makes face recognition difficult, and achieves the effect of reducing the difficulty of face recognition.
Fig. 3 is a flowchart of another face recognition method according to an embodiment of the present application. The method may include the following steps:
step 301, shooting preset samples of the target posture through a camera in at least two postures to obtain at least two sample images.
The preset sample in the target posture may be placed in front of the camera, and the preset sample is shot while the camera is in each of at least two different postures. Optionally, the preset sample may be an object with a rectangular plane, such as a book or a ruler, and the plane of the preset sample facing the camera may be rectangular.
As shown in fig. 4, fig. 4 is a schematic diagram of the camera shooting a preset sample of the target posture in one posture in the method shown in fig. 3. The preset sample 41 may be a ruler, and the target posture of the preset sample 41 may be that the length direction of the preset sample is the horizontal direction (i.e., a direction parallel to the ground); the length direction of the ruler is the long-side direction of the rectangular plane of the ruler facing the camera.
The camera 42 may be installed at the edge of the display surface of the digital signage 43 that is far from the ground. The display surface of the digital signage 43 is rectangular, and after the digital signage 43 is placed at the preset position, the edge of the display surface far from the ground is parallel to the horizontal direction. Installing the camera 42 at this position helps ensure that the acquired face image is relatively complete.
In the setting manner shown in fig. 4, a preset sample of the target pose can be taken by the camera in the first camera pose, so as to obtain a first sample image. Since the shape of an image sensor (also referred to as a photosensitive element) in a camera is generally rectangular, it acquires an image that is generally rectangular. The pose of the camera can thus be determined by the pose of the long side of the image sensor. For example, the first imaging posture of the camera may refer to a posture in which the longitudinal direction of the image sensor in the camera is parallel to the ground, or may refer to a posture in which the longitudinal direction of the image sensor is perpendicular to the ground.
As shown in fig. 5, fig. 5 is a schematic diagram of the camera shooting a preset sample of the target posture in another posture in the method shown in fig. 3. The preset sample may be the ruler 41, and the target posture of the preset sample may be that the length direction of the ruler is the horizontal direction. The camera 42 can be installed at a side edge of the display surface of the digital signage 43 that is perpendicular to the horizontal direction. The display surface of the digital signage 43 is rectangular, and after the digital signage 43 is placed at the preset position, the display surface has two edges perpendicular to the horizontal direction. Installing the camera at this position can prevent the acquired face image from being incomplete because the camera 42 is too high or too low.
In the arrangement shown in fig. 5, the preset sample of the target posture can be captured by the camera in the second imaging posture, so as to obtain a second sample image. The first imaging posture is perpendicular to the second imaging posture.
Step 302, at least two preset postures of preset samples in at least two sample images are determined.
The at least two preset postures may include a horizontal posture and a vertical posture of the preset sample. Because the shape of the image sensor in the camera is generally rectangular, the obtained image is generally rectangular; the horizontal posture of the preset sample may refer to a posture in which the length direction of the preset sample in the image is parallel to the horizontal direction, and the vertical posture may refer to a posture in which that length direction is perpendicular to the horizontal direction.
After the two sample images are acquired by the camera in the two postures, image recognition can be performed on the two sample images to obtain the coordinates of the four vertices of the preset sample in each of the at least two sample images, and a first preset posture and a second preset posture of the preset sample are determined from the respective vertex coordinates. The first preset posture may be the horizontal posture, and the second preset posture may be the vertical posture.
As shown in fig. 6, fig. 6 is a schematic diagram of two preset postures in two sample images in the method shown in fig. 3, wherein the x direction is a horizontal direction, and the y direction is a vertical direction. The first preset posture in the first sample image 61 is that the preset sample 41 is in a horizontal posture, and the second preset posture in the second sample image 62 is that the preset sample 41 is in a vertical posture. In the two sample images, the postures of the preset samples are different, which is caused by the different shooting postures of the cameras.
Step 303, placing the at least two preset postures in one-to-one correspondence with at least two face recognition algorithms.
The face recognition algorithm can be adjusted for different postures, so that face recognition algorithms corresponding to the at least two preset postures are obtained. Optionally, the first preset posture and the second preset posture may be matched with different face recognition algorithms; that is, the camera in different postures may correspond to different face recognition algorithms, so that the camera can perform face recognition in various postures.
As shown in fig. 7, fig. 7 is a schematic diagram of the camera acquiring the first sample image in the first imaging posture in the method shown in fig. 3. The preset sample 41 in the first sample image 51 is a pedestrian in the target posture, and the target posture is a standing posture. When the camera 42 shoots a standing pedestrian in the first imaging posture, the first preset posture of the pedestrian in the acquired first sample image 51 is the horizontal posture, which refers to a posture in which the height direction of the pedestrian is perpendicular to the long side of the first sample image; the first preset posture may correspond to one face recognition algorithm.
As shown in fig. 8, fig. 8 is a schematic diagram of the camera acquiring the second sample image in the second imaging posture in the method shown in fig. 3. The preset sample 41 in the second sample image 52 is a pedestrian in the target posture, and the target posture is a standing posture. When the camera 42 shoots a standing pedestrian in the second imaging posture, the second preset posture of the pedestrian in the second sample image 52 is the vertical posture, which refers to a posture in which the height direction of the pedestrian is parallel to the long side of the second sample image; the second preset posture may correspond to another face recognition algorithm.
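As a minimal sketch of the one-to-one correspondence of step 303, assuming the two preset postures shown in figs. 7 and 8 ("horizontal" and "vertical"); the two algorithm functions are hypothetical stubs standing in for face recognition models that the description does not name.

```python
from typing import Any, Callable, Dict, List

def recognize_faces_horizontal(image: Any) -> List[dict]:
    """Stub for the algorithm matched to the first (horizontal) preset posture."""
    raise NotImplementedError("plug in a face recognition model tuned for horizontal face poses")

def recognize_faces_vertical(image: Any) -> List[dict]:
    """Stub for the algorithm matched to the second (vertical) preset posture."""
    raise NotImplementedError("plug in a face recognition model tuned for vertical face poses")

# Step 303: each preset posture keys exactly one face recognition algorithm.
POSE_TO_ALGORITHM: Dict[str, Callable[[Any], List[dict]]] = {
    "horizontal": recognize_faces_horizontal,
    "vertical": recognize_faces_vertical,
}
```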
Step 304, after receiving the target instruction, shooting a first image of a preset sample of the target posture through the camera.
The target instruction may be issued to the camera by an operator, or may be issued to the camera under a predetermined condition. For example, an operator may send the target instruction to the camera after finishing the installation of the camera and placing the preset sample in the target posture, or the target instruction may be issued after the camera has been installed. The target instruction may be an instruction that instructs the camera to shoot the preset sample.
As shown in fig. 4, the preset sample 41 may be placed in the target posture by the worker who installs the camera 42. The preset sample may be placed on a bracket 44, and the bracket 44 can hold the preset sample 41 in a horizontal or vertical state. After the installation position of the camera 42 has been changed and the preset sample has been placed in the target posture, the target instruction is sent to the camera 42; after receiving the target instruction, the camera 42 shoots the preset sample 41 in the target posture, so that the first image of the preset sample 41 in the target posture is obtained.
Step 305, determining a first posture of the preset sample in the first image.
After the camera acquires, in its current shooting posture, the first image of the preset sample of the target posture, the first posture of the preset sample in the first image can be identified for comparison with the at least two preset postures.
The first posture of the preset sample in the first image may be determined in either of the following two ways:
In the first way, if the preset sample is a rectangle, the first posture of the preset sample in the first image can be determined by identifying the four vertices of the rectangle in the first image. As shown in fig. 9, step 305 may include the following two sub-steps:
and a substep 3051 of performing image recognition on the first image to obtain coordinates of four vertexes of a preset sample in the first image.
Because the preset sample is rectangular, the coordinates of the four vertices of the rectangular preset sample in the image can be obtained through image recognition technology. This technology can identify regular geometric figures by means of feature extraction, filtering, graying, edge detection and other processing of the image, and can accurately determine the coordinate parameters of the figures.
As shown in fig. 10, fig. 10 is a schematic diagram of the coordinates of the four vertices of the preset sample in the first image in the method shown in fig. 3. Illustratively, the coordinates of the first vertex A1 of the preset sample 41 in the first image 71 are (2, 5), the coordinates of the second vertex A2 are (10, 5), the coordinates of the third vertex A3 are (2, 7), and the coordinates of the fourth vertex A4 are (10, 7).
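A minimal sketch of such an image recognition step, assuming OpenCV is used (the description does not name a library): graying, filtering and edge detection, followed by polygon approximation of the largest contour, recover the four vertex coordinates of the rectangular preset sample. The function name is illustrative, and the OpenCV 4.x return convention of findContours is assumed.

```python
import cv2
import numpy as np

def find_sample_vertices(first_image: np.ndarray) -> np.ndarray:
    """Return the four vertex coordinates (shape (4, 2)) of the rectangular preset sample."""
    gray = cv2.cvtColor(first_image, cv2.COLOR_BGR2GRAY)          # graying
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)                    # filtering
    edges = cv2.Canny(blurred, 50, 150)                            # edge detection
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for contour in sorted(contours, key=cv2.contourArea, reverse=True):
        perimeter = cv2.arcLength(contour, True)
        approx = cv2.approxPolyDP(contour, 0.02 * perimeter, True)  # simplify to a polygon
        if len(approx) == 4:                                        # first quadrilateral = preset sample
            return approx.reshape(4, 2)
    raise ValueError("no rectangular preset sample found in the first image")
```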
Sub-step 3052: determining the first posture of the preset sample according to the coordinates of the four vertices.
The preset sample in the first image is a regular rectangle, so the four vertices of the rectangle can be used to represent the posture of the preset sample, and the posture of the preset sample in the first image can be obtained from the vertex coordinates.
Sub-step 3052 may include:
1) Determining the length direction of the preset sample according to the coordinates of the four vertices.
The length direction of the preset sample may refer to the long-side direction of the rectangle. As shown in fig. 10, according to the differences between the coordinates of the four vertices of the preset sample 41 in the first image 71, it may be determined that the line between the first vertex A1 and the second vertex A2, or the line between the third vertex A3 and the fourth vertex A4, is a long side of the rectangle; that is, the direction of the line between A1 and A2, or of the line between A3 and A4, is the length direction f1 of the preset sample.
The direction of the line between the first vertex and the second vertex is parallel to the direction of the line between the third vertex and the fourth vertex.
2) Determining the first posture of the preset sample based on the length direction.
The length direction of the preset sample in the first image can be used to represent the first posture of the preset sample. As shown in fig. 10, the length direction f1 of the preset sample in the first image is the horizontal direction x, that is, the first posture of the preset sample 41 in the first image 71 is the horizontal posture.
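A minimal sketch of sub-step 3052 using the coordinates of fig. 10; the vertex ordering (A1 adjacent to both A2 and A3) is assumed from the figure, and the 45-degree split between the horizontal and vertical postures is an illustrative choice, not part of the description.

```python
import math

def first_pose_from_vertices(a1, a2, a3, a4) -> str:
    """Classify the preset sample as 'horizontal' or 'vertical' from its four vertex coordinates."""
    side_a = math.dist(a1, a2)              # A1-A2, parallel to A3-A4
    side_b = math.dist(a1, a3)              # A1-A3, parallel to A2-A4
    p, q = (a1, a2) if side_a >= side_b else (a1, a3)   # the longer pair gives the length direction f1
    dx, dy = q[0] - p[0], q[1] - p[1]
    angle = math.degrees(math.atan2(abs(dy), abs(dx)))  # angle of f1 against the image x axis
    return "horizontal" if angle < 45 else "vertical"

# Fig. 10 example: A1(2, 5), A2(10, 5), A3(2, 7), A4(10, 7); the long side lies along x, i.e. horizontal.
print(first_pose_from_vertices((2, 5), (10, 5), (2, 7), (10, 7)))
```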
In the second way, as shown in fig. 11, step 305 may include the following two sub-steps:
and a substep 3053, performing image recognition on the first image to obtain an image of a preset sample in the first image.
The image recognition may be performed as in sub-step 3051. Alternatively, in another embodiment, binarization processing may be performed on the first image. The binarization processing may include: through threshold selection, binarizing the grayscale image with 256 brightness levels into a binarized first image whose pixels have only the gray level 0 or 255, and then performing contour detection of the preset sample on the binarized first image to obtain the image of the preset sample in the first image.
Sub-step 3054: detecting the posture of the image of the preset sample in the first image to determine the first posture.
Posture detection means determining the first posture of the preset sample in the first image according to the position of the image of the preset sample in the first image, where the position may be that of the whole image of the preset sample or of some feature points of the image of the preset sample. After the image of the preset sample is acquired, the first posture may be determined in the following two modes.
The first mode is as follows: according to the acquired image of the preset sample, a circumscribed rectangular frame of the preset sample can be obtained, and the position coordinates in the first image of each vertex of the circumscribed rectangular frame are acquired. Based on these position coordinates, the inclination angle of the preset sample framed by the circumscribed rectangle is determined; the inclination angle can be used to reflect the deviation angle of the preset sample relative to the target posture. The first posture of the preset sample can then be determined from the inclination angle.
The second mode is as follows: extracting object feature points of the preset sample in the first image, where the first posture of the preset sample can be determined from the positions of the object feature points. An object feature point is a point where the image gray value changes drastically, or a point of large curvature on an image edge (i.e., the intersection of two edges). The object feature points can reflect the essential features of the object image and can be used to identify the preset sample in the first image.
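The first mode can be sketched as follows, again assuming OpenCV: binarize the first image, take the largest contour as the preset sample, and read the inclination angle of its circumscribed (minimum-area) rectangle. The angle normalization is indicative only, since the minAreaRect angle convention differs between OpenCV versions.

```python
import cv2
import numpy as np

def sample_tilt_angle(first_image: np.ndarray) -> float:
    """Inclination angle (degrees) of the preset sample's circumscribed rectangle in the first image."""
    gray = cv2.cvtColor(first_image, cv2.COLOR_BGR2GRAY)
    # Binarize the 256-level grayscale image so every pixel is either 0 or 255 (Otsu threshold selection).
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        raise ValueError("preset sample not found in the first image")
    sample = max(contours, key=cv2.contourArea)           # outline of the preset sample
    (_cx, _cy), (w, h), angle = cv2.minAreaRect(sample)   # circumscribed rectangle and its angle
    # Report the deviation of the long side from the image x axis (0 degrees = horizontal posture).
    return angle if w >= h else angle - 90.0
```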
Step 306, comparing the first posture with at least two preset postures, and determining whether a target preset posture matched with the first posture exists in the at least two preset postures.
The at least two preset postures correspond to the at least two face recognition algorithms one to one, and the at least two preset postures are postures of the preset sample in the image of the preset sample in the target posture shot by the camera in at least two different postures.
In step 302, the first preset posture and the second preset posture of the preset sample in the two sample images can be obtained, where the first preset posture and the second preset posture respectively correspond to different shooting postures of the camera. As shown in fig. 6 and 10, comparing the first posture of the preset sample with the first preset posture and the second preset posture means comparing whether the length directions of the preset sample in the first image and in the two sample images are consistent; if the length directions are consistent, the first posture matches the first preset posture or the second preset posture.
The matching of the first posture with the first preset posture or the second preset posture may mean that the included angle between the first posture and the first preset posture or the second preset posture (the included angle may refer to the angle between the long-side directions of the preset sample in the different postures) is less than 10 degrees.
If the included angle between the first posture and the first preset posture and the included angle between the first posture and the second preset posture are both greater than or equal to 10 degrees, a reminder is sent out through the camera, and the reminder is used to prompt correction of the skewed installation of the camera.
If a target preset posture matching the first posture exists in the at least two preset postures, step 307 is executed; if no target preset posture matching the first posture exists in the at least two preset postures, the installation posture of the camera is adjusted and step 304 is executed.
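A minimal sketch of the comparison in steps 306 and 307, treating each posture as the angle (in degrees) of the preset sample's long side and using the 10-degree tolerance stated above; the posture names and the reminder path are illustrative assumptions.

```python
from typing import Dict, Optional

def match_preset_pose(first_pose_angle: float,
                      preset_pose_angles: Dict[str, float],
                      tolerance: float = 10.0) -> Optional[str]:
    """Return the target preset posture whose angle is within `tolerance` degrees of the first posture."""
    for name, preset_angle in preset_pose_angles.items():
        diff = abs(first_pose_angle - preset_angle) % 180.0   # lines 180 degrees apart share a direction
        diff = min(diff, 180.0 - diff)
        if diff < tolerance:
            return name             # step 307: this preset posture is the current posture of the camera
    return None                     # no match: send the reminder and adjust the camera installation

# Example: a first posture tilted 3 degrees from horizontal matches the "horizontal" preset posture.
print(match_preset_pose(3.0, {"horizontal": 0.0, "vertical": 90.0}))
```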
Step 307, when a target preset posture matched with the first posture exists in the at least two preset postures, determining the target preset posture as the current posture of the camera.
If the first posture is consistent with any one of the at least two preset postures, that consistent preset posture is the target preset posture, and the target preset posture can be determined as the current posture of the camera.
Step 308, determining the algorithm corresponding to the target preset posture as the target face recognition algorithm matched with the current posture of the camera.
The target preset posture is any one of the at least two preset postures, and different preset postures correspond to different face recognition algorithms; that is, the target preset posture has a face recognition algorithm matched with it, and this face recognition algorithm can be the target face recognition algorithm matched with the current posture of the camera. As shown in fig. 6 and 10, if the first posture is determined to be consistent with the first preset posture, the first preset posture is the target preset posture, and the face recognition algorithm corresponding to the first preset posture is the target face recognition algorithm matched with the current posture of the camera.
Step 309, performing face recognition through the target face recognition algorithm.
When a person passes by or stays in front of the digital signage, a face image of the person can be acquired through the camera, and the target face recognition algorithm performs face recognition on the face image to obtain the face information.
Step 310, when face information is detected, acquiring recommendation information according to the face information.
The face information includes one or more of age, gender, expression, and the duration for which a face is detected. The advertiser can push corresponding advertisements according to the information about the person in front of the digital signage; for example, if the person is female, product information relevant to women can be recommended. Alternatively, the advertiser can collect data on how long a number of faces remain in front of the digital signage, which can be used to judge whether the public accepts the advertisement, so that the form and content of the advertisement can be adjusted.
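Purely as a hypothetical illustration of steps 310 and 311 (the description does not specify any recommendation rules), detected face information could be mapped to recommendation information roughly as follows; every rule and category below is an assumption.

```python
from typing import List, Optional

def pick_recommendation(age: Optional[int], gender: Optional[str], dwell_seconds: float) -> List[str]:
    """Illustrative rules only: choose recommendation categories from detected face information."""
    recommendations: List[str] = []
    if gender == "female":
        recommendations.append("products related to female audiences")   # example given in the description
    if age is not None and age < 30:
        recommendations.append("entertainment information")
    if dwell_seconds > 5.0:
        recommendations.append("more detail on the advertisement currently shown")
    return recommendations or ["general public information"]
```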
Step 311, displaying the recommendation information through the screen.
After receiving the recommendation information, the screen of the digital signage can display it to the people in front of the screen, so that the content of the digital signage better fits its audience.
To sum up, the embodiment of the present application provides a face recognition method. A first image of a preset sample in a target posture is shot through a camera, and a target face recognition algorithm matched with the current posture of the camera is obtained according to the first posture of the preset sample in the first image and the target posture; face recognition is then performed according to the target face recognition algorithm. In this way, the face recognition algorithms of the camera in different postures can be determined, so that the camera can perform face recognition in multiple postures. This solves the problem in the related art that the camera needs to be placed in a preset posture, which makes face recognition difficult, and achieves the effect of reducing the difficulty of face recognition.
Fig. 12 is a block diagram of a structure of a face recognition apparatus according to an embodiment of the present application, where the face recognition apparatus 1200 includes:
an image acquisition module 1210 for capturing a first image of a preset sample of a target gesture by a camera;
a first pose determination module 1220, configured to determine a first pose of a preset sample in the first image;
the second posture determination module 1230 is configured to determine the current posture of the camera according to the first posture and the target posture;
the first algorithm determining module 1240 is used for determining a target face recognition algorithm matched with the current posture of the camera;
a face recognition module 1250 configured to perform face recognition through a target face recognition algorithm.
Optionally, as shown in fig. 13, fig. 13 is a block diagram of a structure of another face recognition apparatus provided in the embodiment of the present application, where the face recognition apparatus 1200 further includes:
the gesture comparison module 1260 is used for comparing the first gesture with at least two preset gestures, wherein the at least two preset gestures correspond to at least two human face recognition algorithms one by one, and the at least two preset gestures are gestures of a preset sample in a target gesture photographed by the camera in at least two different gestures;
a third posture determination module 1270, configured to determine, when a target preset posture matching the first posture exists in the at least two preset postures, the target preset posture as the current posture of the camera;
a second algorithm determining module 1280, configured to determine an algorithm corresponding to the preset target pose as a target face recognition algorithm matched with the current pose of the camera.
To sum up, the face recognition apparatus provided in the embodiment of the present application shoots, through the camera, a first image of a preset sample in a target posture, and obtains, according to the first posture of the preset sample in the first image and the target posture, a target face recognition algorithm matched with the current posture of the camera; face recognition is then performed according to the target face recognition algorithm. In this way, the face recognition algorithms of the camera in different postures can be determined, so that the camera can perform face recognition in multiple postures. This solves the problem in the related art that the camera needs to be placed in a preset posture, which makes face recognition difficult, and achieves the effect of reducing the difficulty of face recognition.
The embodiment of the application provides a face recognition device, which comprises a processor, a camera and a memory.
The camera and the memory are both connected to the processor, at least one instruction, at least one program, a code set, or an instruction set is stored in the memory, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by the processor to implement any of the face recognition methods according to the above aspects.
Optionally, the device further comprises a screen, which may be used to play recommendation information obtained from the face recognition result, the recommendation information including advertisements; the processor and the memory may be disposed in the screen.
Fig. 14 shows a block diagram of a face recognition device 1400 according to an exemplary embodiment of the present application. The face recognition device 1400 may be: digital signage, a smart phone, a tablet computer, an MP3 player (Moving Picture Experts Group Audio Layer III), an MP4 player (Moving Picture Experts Group Audio Layer IV), a notebook computer, or a desktop computer. The face recognition device 1400 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, desktop terminal, etc.
Generally, the face recognition device 1400 includes: a processor 1401, and a memory 1402.
Processor 1401 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so forth. The processor 1401 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 1401 may also include a main processor and a coprocessor; the main processor is a processor for processing data in the awake state, also called a Central Processing Unit (CPU), and the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor 1401 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content that needs to be displayed on the display screen. In some embodiments, the processor 1401 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 1402 may include one or more computer-readable storage media, which may be non-transitory. Memory 1402 may also include high speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 1402 is used to store at least one instruction for execution by processor 1401 to implement the face recognition method provided by the method embodiments herein.
In some embodiments, terminal 1400 may further optionally include: a peripheral device interface 1403 and at least one peripheral device. The processor 1401, the memory 1402, and the peripheral interface 1403 may be connected by buses or signal lines. Each peripheral device may be connected to the peripheral device interface 1403 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1404, touch display 1405, camera 1406, audio circuitry 1407, positioning component 1408 and power supply 1409.
The peripheral device interface 1403 can be used to connect at least one peripheral device related to I/O (Input/Output) to the processor 1401 and the memory 1402. In some embodiments, the processor 1401, memory 1402, and peripheral interface 1403 are integrated on the same chip or circuit board; in some other embodiments, any one or both of the processor 1401, the memory 1402, and the peripheral device interface 1403 can be implemented on separate chips or circuit boards, which are not limited by this embodiment.
The Radio Frequency circuit 1404 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 1404 communicates with communication networks and other communication devices via electromagnetic signals. The rf circuit 1404 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 1404 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 1404 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the world wide web, metropolitan area networks, intranets, generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1404 may further include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 1405 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1405 is a touch display screen, the display screen 1405 also has the ability to capture touch signals at or above the surface of the display screen 1405. The touch signal may be input to the processor 1401 for processing as a control signal. At this point, the display 1405 may also be used to provide virtual buttons and/or virtual keyboards, also referred to as soft buttons and/or soft keyboards. In some embodiments, the display 1405 may be one, providing the front panel of the terminal 1400; in other embodiments, the display 1405 may be at least two, respectively disposed on different surfaces of the terminal 1400 or in a foldable design; in still other embodiments, display 1405 may be a flexible display disposed on a curved surface or on a folded surface of terminal 1400. Even more, the display 1405 may be arranged in a non-rectangular irregular pattern, i.e., a shaped screen. The Display 1405 can be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), and the like.
The camera assembly 1406 is used to capture images or video. Optionally, camera assembly 1406 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, the main camera and the wide-angle camera are fused to realize panoramic shooting and a VR (Virtual Reality) shooting function or other fusion shooting functions. In some embodiments, camera assembly 1406 may also include a flash. The flash lamp can be a single-color temperature flash lamp or a double-color temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuit 1407 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals and inputting the electric signals to the processor 1401 for processing, or inputting the electric signals to the radio frequency circuit 1404 for realizing voice communication. For stereo capture or noise reduction purposes, multiple microphones may be provided, each at a different location of terminal 1400. The microphone may also be an array microphone or an omni-directional acquisition microphone. The speaker is then used to convert electrical signals from the processor 1401 or the radio frequency circuit 1404 into sound waves. The loudspeaker can be a traditional film loudspeaker and can also be a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, the audio circuit 1407 may also include a headphone jack.
The positioning component 1408 is used to locate the current geographic position of the terminal 1400 to implement navigation or LBS (Location Based Service). The positioning component 1408 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, or the Galileo system of the European Union.
Power supply 1409 is used to supply power to the various components of terminal 1400. The power source 1409 may be alternating current, direct current, disposable or rechargeable. When the power source 1409 comprises a rechargeable battery, the rechargeable battery can be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery can also be used to support fast charge technology.
In some embodiments, terminal 1400 also includes one or more sensors 1410. The one or more sensors 1410 include, but are not limited to: a fingerprint sensor 1411, an optical sensor 1412, and a proximity sensor 1413.
The fingerprint sensor 1411 is used for collecting a fingerprint of a user, and the processor 1401 is used for identifying the identity of the user according to the fingerprint collected by the fingerprint sensor 1411, or the fingerprint sensor 1411 is used for identifying the identity of the user according to the collected fingerprint. Upon recognizing that the user's identity is a trusted identity, processor 1401 authorizes the user to perform relevant sensitive operations including unlocking the screen, viewing encrypted information, downloading software, paying for, and changing settings, etc. Fingerprint sensor 1411 may be disposed on the front, back, or side of terminal 1400. When a physical key or vendor Logo is provided on the terminal 1400, the fingerprint sensor 1411 may be integrated with the physical key or vendor Logo.
The optical sensor 1412 is used to collect ambient light intensity. In one embodiment, processor 1401 can control the display brightness of touch display 1405 based on the ambient light intensity collected by optical sensor 1412. Specifically, when the ambient light intensity is high, the display luminance of the touch display 1405 is increased; when the ambient light intensity is low, the display brightness of the touch display 1405 is turned down. In another embodiment, the processor 1401 can also dynamically adjust the shooting parameters of the camera assembly 1406 according to the intensity of the ambient light collected by the optical sensor 1412.
The proximity sensor 1413, also known as a distance sensor, is typically provided on the front panel of the terminal 1400. The proximity sensor 1413 is used to collect the distance between the user and the front surface of the terminal 1400. In one embodiment, when the proximity sensor 1413 detects that the distance between the user and the front face of the terminal 1400 gradually decreases, the processor 1401 controls the touch display 1405 to switch from the screen-on state to the screen-off state; when the proximity sensor 1413 detects that the distance between the user and the front surface of the terminal 1400 gradually increases, the processor 1401 controls the touch display 1405 to switch from the screen-off state to the screen-on state.
Those skilled in the art will appreciate that the configuration shown in fig. 14 is not intended to be limiting of terminal 1400 and may include more or fewer components than shown, or some components may be combined, or a different arrangement of components may be employed.
An embodiment of the present application provides a computer storage medium, where at least one instruction, at least one program, a code set, or a set of instructions is stored in the computer storage medium, and the at least one instruction, the at least one program, the code set, or the set of instructions is loaded and executed by a processor to implement the face recognition method according to any one of the above aspects.
In this application, the terms "first," "second," "third," and "fourth" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. The term "plurality" means two or more unless explicitly defined otherwise.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is intended only to illustrate the alternative embodiments of the present application, and should not be construed as limiting the present application, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (15)

1. A face recognition method, comprising:
shooting, through a camera, a first image of a preset sample in a target posture;
determining a first posture of the preset sample in the first image;
determining a current posture of the camera according to the first posture and the target posture;
determining a target face recognition algorithm matched with the current posture of the camera;
and performing face recognition through the target face recognition algorithm.
2. The method of claim 1, wherein the determining a current posture of the camera according to the first posture and the target posture comprises:
comparing the first posture with at least two preset postures, wherein the at least two preset postures correspond to at least two face recognition algorithms in one-to-one correspondence, and the at least two preset postures are postures of the preset sample in the target posture in images shot by the camera in at least two different postures;
when a target preset posture matched with the first posture exists in the at least two preset postures, determining the target preset posture as the current posture of the camera;
and the determining a target face recognition algorithm matched with the current posture of the camera comprises:
determining the face recognition algorithm corresponding to the target preset posture as the target face recognition algorithm matched with the current posture of the camera.
3. The method of claim 2, wherein before the comparing the first posture with at least two preset postures, the method further comprises:
shooting, through the camera in at least two postures, the preset sample in the target posture to obtain at least two sample images;
determining the at least two preset postures of the preset sample in the at least two sample images;
and the at least two preset postures correspond to the at least two face recognition algorithms in one-to-one correspondence.
4. The method of claim 2 or 3, wherein the preset sample is rectangular, and the determining a first posture of the preset sample in the first image comprises:
performing image recognition on the first image to obtain coordinates of four vertices of the preset sample in the first image;
and determining the first posture of the preset sample according to the coordinates of the four vertices.
5. The method of claim 4, wherein the determining the first posture of the preset sample according to the coordinates of the four vertices comprises:
determining a length direction of the preset sample according to the coordinates of the four vertices;
and determining the first posture of the preset sample according to the length direction.
6. The method of claim 4, wherein the at least two preset postures include a horizontal posture and a vertical posture of the preset sample.
7. The method of claim 2 or 3, wherein the determining a first posture of the preset sample in the first image comprises:
performing image recognition on the first image to obtain an image of the preset sample in the first image;
and detecting the posture of the image of the preset sample in the first image to determine the first posture.
8. The method of claim 2 or 3, wherein the camera is installed on a screen, and after the performing face recognition through the target face recognition algorithm, the method further comprises:
when face information is detected, acquiring recommendation information according to the face information;
and displaying the recommendation information through the screen.
9. The method of claim 8, wherein the face information comprises one or more of an age, a gender, an expression, and a duration for which a face is detected.
10. The method of claim 1, wherein the shooting, through a camera, a first image of a preset sample in a target posture comprises:
after a target instruction is received, shooting, through the camera, the first image of the preset sample in the target posture.
11. A face recognition apparatus, characterized in that the face recognition apparatus comprises:
an image acquisition module, configured to shoot, through a camera, a first image of a preset sample in a target posture;
a first posture determination module, configured to determine a first posture of the preset sample in the first image;
a second posture determination module, configured to determine a current posture of the camera according to the first posture and the target posture;
a first algorithm determination module, configured to determine a target face recognition algorithm matched with the current posture of the camera;
and a face recognition module, configured to perform face recognition through the target face recognition algorithm.
12. The face recognition apparatus according to claim 11, wherein the face recognition apparatus further comprises:
a posture comparison module, configured to compare the first posture with at least two preset postures, wherein the at least two preset postures correspond to at least two face recognition algorithms in one-to-one correspondence, and the at least two preset postures are postures of the preset sample in the target posture in images shot by the camera in at least two different postures;
a third posture determination module, configured to determine, when a target preset posture matched with the first posture exists in the at least two preset postures, the target preset posture as the current posture of the camera;
and a second algorithm determination module, configured to determine the face recognition algorithm corresponding to the target preset posture as the target face recognition algorithm matched with the current posture of the camera.
13. A face recognition device, characterized in that the device comprises a processor, a camera, and a memory;
wherein the camera and the memory are both connected to the processor, at least one instruction, at least one program, a code set, or an instruction set is stored in the memory, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by the processor to implement the face recognition method according to any one of claims 1 to 10.
14. The face recognition device of claim 13, wherein the device further comprises a screen.
15. A computer storage medium, characterized in that at least one instruction, at least one program, a set of codes, or a set of instructions is stored in the computer storage medium, which is loaded and executed by a processor to implement the face recognition method according to any one of claims 1 to 10.
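Purely as an illustration of the posture classification and algorithm selection recited in claims 1 to 6, the Python sketch below classifies a rectangular preset sample as horizontal or vertical from the coordinates of its four vertices and looks up the matching face recognition algorithm; the vertex ordering, the two placeholder algorithms, and all function names are assumptions added for the example and do not form part of the claims.

```python
import math

def detect_first_posture(vertices):
    """Classify the rectangular preset sample as 'horizontal' or 'vertical'
    from the coordinates of its four vertices, assumed ordered around the
    rectangle (claims 4 to 6)."""
    (x0, y0), (x1, y1), (x2, y2), _ = vertices
    side_a = math.hypot(x1 - x0, y1 - y0)   # length of one side
    side_b = math.hypot(x2 - x1, y2 - y1)   # length of the adjacent side
    # The longer side gives the length direction of the sample (claim 5).
    dx, dy = (x1 - x0, y1 - y0) if side_a >= side_b else (x2 - x1, y2 - y1)
    return "horizontal" if abs(dx) >= abs(dy) else "vertical"

def select_recognition_algorithm(first_posture, algorithms_by_posture):
    """Each preset posture corresponds one-to-one to a face recognition
    algorithm; the matched posture selects the algorithm (claims 2 and 3)."""
    return algorithms_by_posture[first_posture]

# Hypothetical per-posture algorithms, present only to make the sketch runnable.
algorithms_by_posture = {
    "horizontal": lambda frame: print("running horizontal-posture recognition"),
    "vertical": lambda frame: print("running vertical-posture recognition"),
}

corners = [(10, 10), (210, 10), (210, 110), (10, 110)]   # example vertex coordinates
algorithm = select_recognition_algorithm(detect_first_posture(corners),
                                         algorithms_by_posture)
algorithm(None)   # a captured camera frame would be passed here in practice
```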
CN202110604756.7A 2021-05-31 2021-05-31 Face recognition method, device, equipment and computer storage medium Pending CN115482564A (en)

Priority Applications (1)

Application Number: CN202110604756.7A
Publication: CN115482564A (en)
Priority Date: 2021-05-31
Filing Date: 2021-05-31
Title: Face recognition method, device, equipment and computer storage medium

Publications (1)

Publication Number: CN115482564A (en)
Publication Date: 2022-12-16

Family ID: 84419343

Country Status (1)

CN (1): CN115482564A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination