CN112911150A - Automatic snapshot method for high-definition human face in target area - Google Patents

Automatic snapshot method for high-definition human face in target area

Info

Publication number
CN112911150A
CN112911150A (Application CN202110123381.2A)
Authority
CN
China
Prior art keywords
point
target area
camera
pan
plane
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110123381.2A
Other languages
Chinese (zh)
Inventor
张林玲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Heyi Video Information Technology Co ltd
Original Assignee
Guangzhou Heyi Video Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Heyi Video Information Technology Co ltd filed Critical Guangzhou Heyi Video Information Technology Co ltd
Priority to CN202110123381.2A priority Critical patent/CN112911150A/en
Publication of CN112911150A publication Critical patent/CN112911150A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/61Control of cameras or camera modules based on recognised objects
    • H04N23/611Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • GPHYSICS
    • G07CHECKING-DEVICES
    • G07CTIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C1/00Registering, indicating or recording the time of events or elapsed time, e.g. time-recorders for work people
    • G07C1/10Registering, indicating or recording the time of events or elapsed time, e.g. time-recorders for work people together with the recording, indicating or registering of other data, e.g. of signs of identity
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/695Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Studio Devices (AREA)

Abstract

The invention discloses a method for automatically capturing high-definition faces in a target area, comprising the following steps in sequence: establishing a spatial rectangular coordinate system; building a spatial model from geometric positions and calculating the shooting angle and zoom factor of a shooting target relative to the camera's mechanical pan-tilt; locating the geometric position of every point in the target area relative to the pan-tilt camera by dividing the shooting scene into a grid; and traversing the grid to achieve automatic cruise shooting without blind spots, computing each grid cell's position in the spatial coordinate system from its number to guide the pan-tilt control program. The face snapshot system captures face photos automatically and without blind spots, enables unattended, non-intrusive image-recognition face attendance, and reduces manual workload.

Description

Automatic snapshot method for high-definition human face in target area
Technical Field
The invention relates to the field of face-recognition attendance checking, and in particular to a method for automatically capturing high-definition faces in a target area.
Background
Managing the people in a target area (attendance checking, locating target persons, safety precautions, sign-in) is a high-frequency requirement of daily administration, for example maintaining a school's normal teaching discipline and order, ensuring that every school task can be carried out smoothly, and screening the people in the area for safety. Such work is a key link in enforcing discipline and regulating daily learning and behavior in the area. In the prior art this management work is done manually, which is inefficient and error-prone.
Therefore, there is a need to provide a new technical solution to solve the above problems to better complete the above management work.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide an automatic snapshot method of a high-definition face in a target area.
The purpose of the invention is realized by the following technical scheme:
a method for automatically capturing a high-definition face of a target area comprises the following steps:
s1, establishing a spatial rectangular coordinate system;
s2, building a spatial model from geometric positions and calculating the shooting angle and zoom factor of the shooting target relative to the camera's mechanical pan-tilt;
s3, locating the geometric position of every point in the target area relative to the pan-tilt camera by dividing the shooting scene into a grid; traversing the grid to achieve automatic cruise shooting without blind spots, and computing each grid cell's position in the spatial coordinate system from its number to guide the pan-tilt control program.
The step S1 specifically includes:
(1) inputting target area parameters: the length, width and height of the target area, denoted L, W and H respectively; the mounting height hc of the camera above the floor; and the distance lc from the camera to the wall on the students' left (as seated);
(2) establishing a spatial rectangular coordinate system (xyz) with the mounting position of the pan-tilt camera as the coordinate origin O(0,0,0): the wall on which the camera is mounted is the plane containing the x and y axes, and the z axis passes through point O perpendicular to plane XOY. The point of the pan-tilt camera is therefore P(0,0,0), coinciding with the origin O. Because the camera is mounted upright, the coordinate origin of the camera's mechanical pan-tilt coincides with O, the directions of the X, Y and Z axes agree, and the XZ axes of the mechanical pan-tilt coincide with the XZ axes of the spatial rectangular coordinate system; the spatial rectangular coordinate system is thereby established.
In step S2, building the spatial model from geometric positions and calculating the shooting angle of the shooting target relative to the camera's mechanical pan-tilt specifically comprises the following steps:
Set the parameters of the cruise starting point S(x1, y1, z1) in the target area and acquire its coordinate values. Let M be the projection of S onto plane XOZ, so M = (x1, 0, z1); let B be the projection of S onto the Z axis, so B = (0, 0, z1); let C be the projection of S onto the Y axis, so C = (0, y1, 0); and let A be the projection of S onto the X axis, so A = (x1, 0, 0). From the trigonometric relation tan∠MOB = x1/z1, the horizontal angle through which the mechanical pan-tilt must rotate so that the camera shoots point S head-on is obtained; likewise, to center point S, the vertical angle through which the mechanical pan-tilt must rotate downward is computed, namely:
∠MOB = arctan(x1/z1) (horizontal rotation angle);
∠SOM = arctan(|y1| / √(x1² + z1²)) (downward vertical rotation angle).
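The angle computation just described can be sketched in Python. The arctangent forms below are a reconstruction from the stated projections of point S, since the patent's own formula images are not reproduced in this text:

```python
import math

def pan_tilt_angles(x1: float, y1: float, z1: float) -> tuple[float, float]:
    """Pan/tilt rotation (degrees) needed to shoot point S(x1, y1, z1) head-on."""
    # Horizontal: angle MOB between the Z axis and OM, the projection of OS
    # onto plane XOZ, with tan(MOB) = x1 / z1.
    horizontal = math.degrees(math.atan2(x1, z1))
    # Vertical (downward): angle SOM between OM and OS; opposite side |y1|,
    # adjacent side |OM| = sqrt(x1^2 + z1^2).
    vertical = math.degrees(math.atan2(abs(y1), math.hypot(x1, z1)))
    return horizontal, vertical
```

For a point one unit right of, one unit below, and one unit in front of the camera, this yields a 45° pan and roughly a 35.26° downward tilt.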
in step S2, the establishing a spatial model through the geometric position and calculating a zoom factor of the shooting target with respect to the mechanical pan-tilt of the camera specifically includes the following steps:
setting a reference point F at the farthest position in the Z-axis direction right opposite to a mounting wall of a pan-tilt camera in a target area, namely setting the point F close to the rear wall of the target area, standing at the point F to shoot a reference model, adjusting the zoom factor of a lens of the pan-tilt camera to enable the image quality of the face of the model to be clear, acquiring a Z-axis value zf of a coordinate of the point F, and acquiring the zoom factor F of the camera as the maximum factor fm of the lens of the camera in the target area, wherein the Z-axis value zf is approximately equal to the length L of the target area; the zoom factor of the camera lens during shooting with the coordinate points of different z values in the current scene centered on the front is calculated by taking fm and zf as references, and if any point e (x, y and z) in the scene is set, the zoom factor value fe during shooting of the point e is calculated as fe (fm/zf).
The step S3 specifically includes the following steps:
(1) designing a cruise path algorithm and carrying out cruise snapshots in the target-area scene according to the target area's length, width and height. The snapshot area is a plane G parallel to plane XOZ and to the target-area floor, at height h above the floor. Plane G is divided into a grid along the directions parallel to the X axis and to the Z axis, ordered from near to far from the pan-tilt camera along the Z axis; the row width wl and column width wr of the grid are based on the widths of the student seats and desks. The target area is thus divided into n rows and r columns, with n = L/wl and r = W/wr, both rounded up, giving the row count n, the column count r and the total grid count cn = n · r for the cruise shoot; the grid cells are numbered sequentially from left to right;
(2) computing the position of each grid cell's center point E in the coordinate system from its number. To find the center of cell number n, let rn be the row and ln the column in which the cell lies; then:
rn = n/(W/wr),
where rn is rounded up (the cell number divided by the number of cells per row);
ln = n%(W/wr)
where ln is the remainder (the cell number modulo the number of cells per row); if the remainder is 0, the cell is in the last column, ln = r;
with rn and ln obtained, the horizontal distances from the cell, in the plane of the floor, to the XOY and YOZ planes of the spatial coordinate system can be computed. Let S1 be the horizontal distance from the cell center E to plane XOY, S2 its horizontal distance to plane YOZ, and S3 its vertical distance to plane XOZ:
S1=(ln-1)*wl+wl/2;
S2=|(rn-1)*wr+wr/2-lc|;
S3=hc-h;
where lc is the mounting distance of the pan-tilt camera from the left wall of the target area and hc is its mounting height above the target-area floor;
thus the coordinates of each cell's center point E in the spatial coordinate system, namely E(S2, S3, S1), can be computed back from the cell number, and therefore the lens rotation angles and zoom factor of the pan-tilt camera for shooting each cell's center head-on and centered can be calculated;
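The number-to-coordinates mapping of steps (1) and (2) can be sketched as follows; this is a minimal illustration whose ceiling/modulo handling follows the rounding rules stated above:

```python
import math

def grid_center(n: int, W: float, wl: float, wr: float,
                lc: float, hc: float, h: float) -> tuple[float, float, float]:
    """Point E(S2, S3, S1) for grid cell number n (numbered 1.. left to right)."""
    r = math.ceil(W / wr)                   # cells per row (number of columns)
    rn = math.ceil(n / r)                   # row of cell n, rounded up
    ln = n % r or r                         # column of cell n; remainder 0 means last column r
    S1 = (ln - 1) * wl + wl / 2             # horizontal distance to plane XOY
    S2 = abs((rn - 1) * wr + wr / 2 - lc)   # horizontal distance to plane YOZ
    S3 = hc - h                             # vertical distance to plane XOZ
    return (S2, S3, S1)
```

For example, cell 11 of a 4-column grid lands in row 3, column 3, consistent with the rounding and modulo rules above.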
(3) starting the cruise: the pan-tilt camera rotates horizontally from the starting point S, whose grid cell is the cruise starting position, and traverses the cells one by one for cruise snapshots. A computer program controls the precise rotation of the camera pan-tilt; after the pan-tilt rotates into place, a picture of the corresponding position is taken and stored. When all cells have been shot, the cruise snapshot of the whole target area is complete.
Compared with the prior art, the invention has the following advantages and beneficial effects:
1. the invention mainly solves the problem of automatic, non-intrusive face snapshots of people in a fixed area. Using a pan-tilt camera, a computer program combined with the algorithm automatically steers the camera pan-tilt through an all-round, blind-spot-free cruise of the area, acquires high-definition images of all faces in the target area, compares the captured face pictures with the face-library information stored on the server to identify each person, and renders a judgment.
2. The invention reduces the workload of tasks that require confirming the people in a target area, such as attendance checking, locating target persons and conference roll call, making the process as unobtrusive and automated as possible.
Drawings
Fig. 1 is a flowchart of an automatic target area high-definition face snapshot method according to the present invention.
Detailed Description
The present invention will be described in further detail with reference to examples and drawings, but the present invention is not limited thereto.
A method for automatically capturing a high-definition face of a target area comprises the following steps:
1. Input the target area parameters (length, width and height of the target area, denoted L, W and H). The camera's mounting height above the floor is hc, and its distance from the wall on the students' left (as seated) is lc.
2. Establish a spatial rectangular coordinate system (xyz) with the mounting position of the pan-tilt camera as the coordinate origin O(0,0,0): the wall on which the camera is mounted is the plane containing the x and y axes, and the z axis passes through point O perpendicular to plane XOY. The point of the pan-tilt camera is therefore P(0,0,0), coinciding with the origin O. Because the camera is mounted upright (not upside down), the coordinate origin of the camera's mechanical pan-tilt coincides with O, the axis directions agree, and the XZ axes of the mechanical pan-tilt coincide with those of the spatial rectangular coordinate system, as shown in fig. 1.
3. Set the parameters of the cruise starting point S(x1, y1, z1) in the target area and acquire its coordinate values. Let M be the projection of S onto plane XOZ, so M = (x1, 0, z1); let B be the projection of S onto the Z axis, so B = (0, 0, z1); let C be the projection of S onto the Y axis, so C = (0, y1, 0); and let A be the projection of S onto the X axis, so A = (x1, 0, 0). From the trigonometric relation tan∠MOB = x1/z1, the horizontal angle through which the mechanical pan-tilt must rotate so that the camera shoots point S head-on is
∠MOB = arctan(x1/z1),
and to center point S, the vertical angle through which the mechanical pan-tilt must rotate downward is
∠SOM = arctan(|y1| / √(x1² + z1²)).
4. Set a reference point F at the farthest position in the Z-axis direction from the wall on which the pan-tilt camera is mounted (i.e. point F is close to the rear wall of the target area). Have a model stand at point F and adjust the zoom factor of the pan-tilt camera lens until the model's face is imaged clearly. Acquire the z value zf of point F's coordinates (zf is approximately equal to the target-area length L), and record the camera's zoom factor at F as the maximum required factor fm within the target area. With fm and zf as references, compute the lens zoom factor for shooting any coordinate point of different z value head-on and centered: for any point e(x, y, z) in the scene, the zoom factor for shooting point e is fe = (fm/zf) · z.
5. Combining steps 3 and 4 yields the horizontal and vertical rotation angles and the lens zoom factor of the pan-tilt camera for any point e(x, y, z) in the shooting scene.
6. Next, design a cruise path algorithm and carry out cruise snapshots of the target-area scene according to the target area's length, width and height. The snapshot area is a plane G parallel to plane XOZ and to the target-area floor, at height h above the floor. Plane G is divided into a grid along the directions parallel to the X axis and to the Z axis, ordered from near to far from the pan-tilt camera along the Z axis; the row width wl and column width wr of the grid are based on the widths of the student seats and desks. The target area is thus divided into n rows and r columns, with n = L/wl (rounded up) and r = W/wr (rounded up), giving the row count n, the column count r and the total grid count cn = n · r for the cruise shoot. The cells are numbered sequentially from left to right as in table 1:
TABLE 1
1 2 3 4
5 6 7 8
9 10 11 12
13 14 15 16
17 18 19 20
The position of each cell's center point E in the coordinate system can be computed from its number. To find the center of cell number n, let rn be its row and ln its column; then rn = n/(W/wr) (rounded up: the cell number divided by the number of cells per row) and ln = n % (W/wr) (the cell number modulo the number of cells per row; if the remainder is 0, the cell is in the last column, ln = r). For example, with n = 11 in the 5-row, 4-column table above, the cell lies in the third row and third column: rn = 11/4 rounded up gives rn = 3, and ln = 11 % 4 = 3. With rn and ln obtained, the horizontal distances from the cell, in the plane of the floor, to the XOY and YOZ planes can be computed. Let S1 be the horizontal distance from the cell center E to plane XOY, S2 its horizontal distance to plane YOZ, and S3 its vertical distance to plane XOZ: S1 = (ln-1)*wl + wl/2; S2 = |(rn-1)*wr + wr/2 - lc| (lc is the camera's mounting distance from the left wall of the target area); S3 = hc - h (hc is the pan-tilt's mounting height above the target-area floor).
Thus the coordinates of each cell's center point E in the spatial coordinate system, namely E(S2, S3, S1), can be computed back from the cell number. Combining the computations of steps 3 and 4 then yields the lens rotation angles and zoom factor of the pan-tilt camera for shooting each cell's center head-on and centered.
7. Start the cruise: the pan-tilt camera rotates horizontally from the starting point S, whose grid cell is the cruise starting position, and traverses the cells one by one for cruise snapshots. A computer program controls the precise rotation of the camera pan-tilt; after the pan-tilt rotates into place, a picture of the corresponding position is taken and stored. When all cells have been shot, the cruise snapshot of the whole target area is complete.
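Step 7's traversal loop might look like the following sketch; rotate_to and snapshot are hypothetical stand-ins for the PTZ control and capture calls, which the text leaves unspecified:

```python
def cruise(cn, grid_center_fn, angle_fn, zoom_fn, rotate_to, snapshot):
    """Traverse grid cells 1..cn, aim the pan-tilt at each center, save a shot."""
    for n in range(1, cn + 1):
        x, y, z = grid_center_fn(n)      # center point E of cell n
        pan, tilt = angle_fn(x, y, z)    # horizontal / vertical rotation angles
        f = zoom_fn(z)                   # zoom factor for this depth
        rotate_to(pan, tilt, f)          # block until the head is in place
        snapshot(n)                      # take and store the picture for cell n
```

Injecting the geometry and zoom computations as callables keeps the cruise loop independent of the particular camera driver.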
The above embodiments are preferred embodiments of the present invention, but the invention is not limited to them; any change, modification, substitution, combination or simplification that does not depart from the spirit and principle of the invention is to be regarded as an equivalent and falls within the scope of the invention.

Claims (5)

1. A method for automatically capturing high-definition faces in a target area, characterized by comprising the following steps in sequence:
s1, establishing a spatial rectangular coordinate system;
s2, building a spatial model from geometric positions and calculating the shooting angle and zoom factor of the shooting target relative to the camera's mechanical pan-tilt;
s3, locating the geometric position of every point in the target area relative to the pan-tilt camera by dividing the shooting scene into a grid; traversing the grid to achieve automatic cruise shooting without blind spots, and computing each grid cell's position in the spatial coordinate system from its number to guide the pan-tilt control program.
2. The method for automatically capturing high-definition faces in a target area according to claim 1, wherein step S1 specifically comprises:
(1) inputting target area parameters: the length, width and height of the target area, denoted L, W and H respectively; the mounting height hc of the camera above the floor; and the distance lc from the camera to the wall on the students' left (as seated);
(2) establishing a spatial rectangular coordinate system (xyz) with the mounting position of the pan-tilt camera as the coordinate origin O(0,0,0): the wall on which the camera is mounted is the plane containing the x and y axes, and the z axis passes through point O perpendicular to plane XOY; the point of the pan-tilt camera is therefore P(0,0,0), coinciding with the origin O; because the camera is mounted upright, the coordinate origin of the camera's mechanical pan-tilt coincides with O, the directions of the X, Y and Z axes agree, and the XZ axes of the mechanical pan-tilt coincide with those of the spatial rectangular coordinate system; the spatial rectangular coordinate system is thereby established.
3. The method for automatically capturing high-definition faces in a target area according to claim 2, wherein in step S2, building the spatial model from geometric positions and calculating the shooting angle of the shooting target relative to the camera's mechanical pan-tilt specifically comprises the following steps:
setting the parameters of the cruise starting point S(x1, y1, z1) in the target area and acquiring its coordinate values; letting M be the projection of S onto plane XOZ, so M = (x1, 0, z1); letting B be the projection of S onto the Z axis, so B = (0, 0, z1); letting C be the projection of S onto the Y axis, so C = (0, y1, 0); and letting A be the projection of S onto the X axis, so A = (x1, 0, 0); obtaining, from the trigonometric relation tan∠MOB = x1/z1, the horizontal angle through which the mechanical pan-tilt must rotate so that the camera shoots point S head-on, and computing the vertical angle through which the mechanical pan-tilt must rotate downward to center point S, namely:
∠MOB = arctan(x1/z1), ∠SOM = arctan(|y1| / √(x1² + z1²)).
4. The method for automatically capturing high-definition faces in a target area according to claim 2, wherein in step S2, building the spatial model from geometric positions and calculating the zoom factor of the shooting target relative to the camera's mechanical pan-tilt specifically comprises the following steps:
setting a reference point F at the farthest position in the Z-axis direction from the wall on which the pan-tilt camera is mounted, i.e. point F close to the rear wall of the target area, so that its z value zf is approximately equal to the target-area length L; having a model stand at point F and adjusting the zoom factor of the pan-tilt camera lens until the model's face is imaged clearly, and recording this factor as the camera's maximum required factor fm within the target area; computing, with fm and zf as references, the lens zoom factor for shooting any coordinate point of different z value head-on and centered: for any point e(x, y, z) in the scene, the zoom factor for shooting point e is fe = (fm/zf) · z.
5. The method for automatically capturing high-definition faces in a target area according to claim 2, wherein step S3 specifically comprises the following steps:
(1) designing a cruise path algorithm and carrying out cruise snapshots in the target-area scene according to the target area's length, width and height, wherein the snapshot area is a plane G parallel to plane XOZ and to the target-area floor, at height h above the floor; dividing plane G into a grid along the directions parallel to the X axis and to the Z axis, ordered from near to far from the pan-tilt camera along the Z axis, with the row width wl and column width wr of the grid based on the widths of the student seats and desks; the target area is thus divided into n rows and r columns, with n = L/wl and r = W/wr, both rounded up, giving the row count n, the column count r and the total grid count cn = n · r for the cruise shoot; numbering the grid cells sequentially from left to right;
(2) computing the position of each grid cell's center point E in the coordinate system from its number; to find the center of cell number n, let rn be the row and ln the column in which the cell lies; then:
rn = n/(W/wr),
where rn is rounded up (the cell number divided by the number of cells per row);
ln = n%(W/wr)
where ln is the remainder (the cell number modulo the number of cells per row); if the remainder is 0, the cell is in the last column, ln = r;
with rn and ln obtained, the horizontal distances from the cell, in the plane of the floor, to the XOY and YOZ planes of the spatial coordinate system can be computed; let S1 be the horizontal distance from the cell center E to plane XOY, S2 its horizontal distance to plane YOZ, and S3 its vertical distance to plane XOZ:
S1=(ln-1)*wl+wl/2;
S2=|(rn-1)*wr+wr/2-lc|;
S3=hc-h;
wherein lc is the mounting distance of the pan-tilt camera from the left wall of the target area, and hc is its mounting height above the target-area floor;
thus the coordinates of each cell's center point E in the spatial coordinate system, namely E(S2, S3, S1), can be computed back from the cell number, and therefore the lens rotation angles and zoom factor of the pan-tilt camera for shooting each cell's center head-on and centered can be calculated;
(3) starting the cruise: the pan-tilt camera rotates horizontally from the starting point S, whose grid cell is the cruise starting position, and traverses the cells one by one for cruise snapshots; a computer program controls the precise rotation of the camera pan-tilt; after the pan-tilt rotates into place, a picture of the corresponding position is taken and stored; when all cells have been shot, the cruise snapshot of the whole target area is complete.
CN202110123381.2A 2021-01-29 2021-01-29 Automatic snapshot method for high-definition human face in target area Pending CN112911150A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110123381.2A CN112911150A (en) 2021-01-29 2021-01-29 Automatic snapshot method for high-definition human face in target area

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110123381.2A CN112911150A (en) 2021-01-29 2021-01-29 Automatic snapshot method for high-definition human face in target area

Publications (1)

Publication Number Publication Date
CN112911150A (en) 2021-06-04

Family

ID=76120759

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110123381.2A Pending CN112911150A (en) 2021-01-29 2021-01-29 Automatic snapshot method for high-definition human face in target area

Country Status (1)

Country Link
CN (1) CN112911150A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113345011A (en) * 2021-06-25 2021-09-03 北京市商汤科技开发有限公司 Target object position determining method and device, electronic equipment and storage medium
CN113810662A (en) * 2021-09-13 2021-12-17 杭州米越科技有限公司 Linkage snapshot device and method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110304745A1 (en) * 2010-06-10 2011-12-15 Microsoft Corporation Light transport reconstruction from sparsely captured images
CN103093654A (en) * 2013-01-28 2013-05-08 北京航空航天大学 Double video camera interactive intelligent tracking teaching system
JP2015119456A (en) * 2013-12-20 2015-06-25 富士フイルム株式会社 Imaging module and imaging apparatus
CN108833782A (en) * 2018-06-20 2018-11-16 广州长鹏光电科技有限公司 A kind of positioning device and method based on video auto-tracking shooting
CN110647842A (en) * 2019-09-20 2020-01-03 重庆大学 Double-camera classroom inspection method and system

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113345011A (en) * 2021-06-25 2021-09-03 北京市商汤科技开发有限公司 Target object position determining method and device, electronic equipment and storage medium
CN113345011B (en) * 2021-06-25 2023-08-22 北京市商汤科技开发有限公司 Target object position determining method and device, electronic equipment and storage medium
CN113810662A (en) * 2021-09-13 2021-12-17 杭州米越科技有限公司 Linkage snapshot device and method

Similar Documents

Publication Publication Date Title
US10122997B1 (en) Automated matrix photo framing using range camera input
CN109087244B (en) Panoramic image splicing method, intelligent terminal and storage medium
CN101963751B (en) Device and method for acquiring high-resolution full-scene image in high dynamic range in real time
CN112911150A (en) Automatic snapshot method for high-definition human face in target area
CN111355884B (en) Monitoring method, device, system, electronic equipment and storage medium
CN107665483B (en) Calibration-free convenient monocular head fisheye image distortion correction method
US20150304545A1 (en) Method and Electronic Device for Implementing Refocusing
CN105072314A (en) Virtual studio implementation method capable of automatically tracking objects
CN109146781A (en) Method for correcting image and device, electronic equipment in laser cutting
US9451179B2 (en) Automatic image alignment in video conferencing
CN111445537B (en) Calibration method and system of camera
CN107527336A (en) Relative position of lens scaling method and device
CN110545378A (en) intelligent recognition shooting system and method for multi-person scene
CN105469412A (en) Calibration method of assembly error of PTZ camera
CN104883506A (en) Self-service shooting method based on face identification technology
US20230025058A1 (en) Image rectification method and device, and electronic system
CN111189415A (en) Multifunctional three-dimensional measurement reconstruction system and method based on line structured light
CN110636275A (en) Immersive projection system and method
CN104836953B (en) Multi-projector screen characteristics point automatic camera and denoising recognition methods
CN107977998B (en) Light field correction splicing device and method based on multi-view sampling
CN110675482A (en) Spherical Fibonacci pixel dot matrix panoramic picture rendering and displaying method for virtual three-dimensional scene
CN103546680B (en) A kind of deformation-free omni-directional fisheye photographic device and a method for implementing the same
CN113329181B (en) Angle switching method, device, equipment and storage medium of camera
JP7397241B2 (en) Image stitching method, computer readable storage medium and computing device
CN111325790A (en) Target tracking method, device and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20210604