CN115065782B - Scene acquisition method, acquisition device, image pickup equipment and storage medium - Google Patents


Info

Publication number
CN115065782B
CN115065782B (application CN202210484370.1A)
Authority
CN
China
Prior art keywords
camera
determining
detection object
image
scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210484370.1A
Other languages
Chinese (zh)
Other versions
CN115065782A (en)
Inventor
李春
陈宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhuhai Shixi Technology Co Ltd
Original Assignee
Zhuhai Shixi Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhuhai Shixi Technology Co Ltd filed Critical Zhuhai Shixi Technology Co Ltd
Priority to CN202210484370.1A priority Critical patent/CN115065782B/en
Publication of CN115065782A publication Critical patent/CN115065782A/en
Application granted granted Critical
Publication of CN115065782B publication Critical patent/CN115065782B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 — Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 — Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 — Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 — Classification, e.g. identification
    • G06V40/20 — Movements or behaviour, e.g. gesture recognition
    • G07 — CHECKING-DEVICES
    • G07C — TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C1/00 — Registering, indicating or recording the time of events or elapsed time, e.g. time-recorders for work people
    • G07C1/10 — Registering, indicating or recording the time of events or elapsed time together with the recording, indicating or registering of other data, e.g. of signs of identity

Abstract

The embodiment of the application discloses a scene acquisition method, an acquisition device, an image pickup device and a storage medium, which are used for improving the quality of the acquired panoramic image while obtaining a larger field of view. The method of the embodiment of the application comprises the following steps: initializing parameters of the first camera and the second camera; establishing a three-dimensional coordinate system with the midpoint between the first camera and the second camera as the origin, determining the position coordinate of a first reference object, and determining the optical axis included angle between the first camera and the second camera according to that position coordinate; determining the inclination angles of the first camera and the second camera; determining the horizontal field of view of the first camera and the second camera according to the inclination angle and the optical axis included angle; determining the focal lengths of the first camera and the second camera; acquiring, within a preset time, a first partial image shot by the first camera and a second partial image shot by the second camera; and cropping and splicing the first partial image and the second partial image to generate a panoramic image.

Description

Scene acquisition method, acquisition device, image pickup equipment and storage medium
Technical Field
The embodiment of the application relates to the field of a photographic monitoring system, in particular to a scene acquisition method, an acquisition device, photographic equipment and a storage medium.
Background
The video camera, as a common electronic product, is widely used in network communication and video chat, and also plays an important role in assisting classroom teaching. At present it is widely deployed in college classrooms, where the main application scenes are video monitoring, personnel attendance checking and the like. In particular, combining technologies such as face recognition to help teachers take classroom attendance has always been a hotspot of camera application.
However, in most primary and middle schools the classroom area is not small, so a lens with a small angle of view can hardly cover the whole classroom. A lens with a large angle of view can cover the classroom, but when an image or video is shot, the pixel resolution of the details of distant objects is lower than with a smaller-angle lens, so that facial information is difficult to identify; a large-angle lens is also accompanied by larger distortion. That is, a lens with a large angle of view cannot provide a large angle of view and, at the same time, high pixel resolution of the details of distant objects, which affects the quality of the acquired image.
Disclosure of Invention
The embodiment of the application provides a scene acquisition method, an acquisition device, image pickup equipment and a storage medium, which are used for improving the quality of an acquired global scene image while obtaining a larger field of view range.
The present application provides, in a first aspect, a scene acquisition method applied to an image capturing apparatus, the image capturing apparatus including a first camera, a second camera, and a base, the scene acquisition method including:
initializing parameters of the first camera and the second camera, wherein the parameters comprise an optical axis angle, an inclination angle and a focal length;
establishing a three-dimensional coordinate system by taking the middle point of the first camera and the second camera as an origin, determining a first reference object position coordinate, and determining an optical axis included angle between the first camera and the second camera according to the first reference object position coordinate;
determining an inclination angle of the first camera and the second camera;
determining the horizontal view field range of the first camera and the second camera according to the inclined angle and the optical axis included angle;
determining the focal lengths of the first camera and the second camera;
respectively acquiring a first partial image shot by the first camera and a second partial image shot by the second camera in preset time;
and cropping and splicing the first partial image and the second partial image to generate a panoramic image.
Optionally, the determining the horizontal field of view range of the first camera and the second camera according to the inclination angle and the optical axis included angle includes:
determining the identified width of the target scene area according to the extension range of the included angle of the optical axis;
determining a detection object group in the target scene area, and acquiring a first position coordinate of a detection object closest to the image pickup device in the extension range of the inclination angle;
determining a first absolute distance between the first location coordinate and the origin;
and determining the horizontal field of view range of the first camera and the second camera according to the identified width and the first absolute distance.
Optionally, the determining the focal lengths of the first camera and the second camera includes:
acquiring second position coordinates of a detection object farthest from the imaging equipment in the extending range of the inclination angle, and determining a second absolute distance between the detection object and the origin according to the second position coordinates;
determining a detection area size of the farthest detection object and a pixel number of the imaging apparatus in a horizontal direction;
and determining the focal lengths of the first camera and the second camera according to the second absolute distance, the detection area size, the pixel number and the pixel size of the image pickup device.
Optionally, after the cropping and stitching of the first partial image and the second partial image, the scene acquisition method further includes:
and carrying out face recognition on the panoramic image so as to update attendance information of the detection object.
Optionally, after the cropping and stitching of the first partial image and the second partial image, the scene acquisition method further includes:
performing anomaly detection object identification on the panoramic image to obtain an anomaly detection object region;
and carrying out human body gesture recognition on the abnormal detection object area, and generating classroom evaluation of the abnormal detection object according to a human body gesture recognition result.
Optionally, in the three-dimensional coordinate system, an x-axis is disposed transversely along the target scene area, a y-axis is disposed longitudinally along the target scene area, and a z-axis is disposed vertically.
Optionally, the base of the image pickup device comprises a plurality of horizontal bottom surfaces, an adjustable included angle is provided between every two horizontal bottom surfaces, and the first camera and the second camera are symmetrically arranged on the corresponding horizontal bottom surfaces respectively.
The present application provides, in a second aspect, a scene acquisition device applied to an image pickup apparatus including a first camera, a second camera, and a base, including:
the parameter initialization unit is used for initializing parameters of the first camera and the second camera, wherein the parameters comprise an optical axis angle, an inclination angle and a focal length;
the first determining unit is used for establishing a three-dimensional coordinate system by taking the middle point of the first camera and the middle point of the second camera as an origin, determining the position coordinate of a first reference object, and determining the included angle of the optical axis between the first camera and the second camera according to the position coordinate of the first reference object;
a second determining unit configured to determine an inclination angle of the first camera and the second camera;
the third determining unit is used for determining the horizontal view field range of the first camera and the second camera according to the inclined angle and the included angle of the optical axis;
a fourth determining unit, configured to determine focal lengths of the first camera and the second camera;
the first acquisition unit is used for respectively acquiring a first partial image shot by the first camera and a second partial image shot by the second camera in preset time;
and the first generation unit is used for cropping and splicing the first partial image and the second partial image to generate a panoramic image.
Optionally, the third determining unit includes:
the width determining module is used for determining the identified width of the target scene area according to the extension range of the included angle of the optical axis;
a first processing module, configured to determine a detection object group in the target scene area, and acquire a first position coordinate of a detection object closest to the imaging device in an extension range of the tilt angle;
a first distance determination module for determining a first absolute distance between the first location coordinate and the origin;
and the horizontal view angle determining module is used for determining the horizontal view field range of the first camera and the second camera according to the identified width and the first absolute distance.
Optionally, the fourth determining unit includes:
a second distance determining module, configured to obtain second position coordinates of a detection object that is farthest from the imaging apparatus within an extension range of the tilt angle, and determine a second absolute distance from the origin according to the second position coordinates;
A pixel number determination module configured to determine a detection area size of the farthest detection object and a pixel number in a horizontal direction of the image pickup apparatus;
and the focal length determining module is used for determining focal lengths of the first camera and the second camera according to the second absolute distance, the detection area size, the pixel number and the pixel size of the image pickup device.
Optionally, the scene acquisition device further includes:
the attendance information processing unit is used for carrying out face recognition on the panoramic image so as to update attendance information of a detection object;
an abnormal object detection unit, configured to perform abnormal detection object recognition on the panoramic image, so as to obtain an abnormal detection object area;
and the evaluation processing unit is used for carrying out human body gesture recognition on the abnormal detection object area and generating class evaluation of the abnormal detection object according to a human body gesture recognition result.
The present application provides, from a third aspect, a scene acquisition device comprising:
a processor, a memory, an input-output unit, and a bus;
the processor is connected with the memory, the input/output unit and the bus;
the memory holds a program which the processor invokes to perform the scene acquisition method according to the first aspect or any of the steps of the first aspect.
An image pickup apparatus, characterized in that the image pickup apparatus includes a first camera, a second camera and a base, and is configured to perform the scene acquisition method according to the first aspect or any of the steps of the first aspect.
A computer readable storage medium comprising instructions which, when run on a computer, cause the computer to perform the scene acquisition method of the first aspect or any of the steps of the first aspect.
From the above technical solutions, the embodiment of the present application has the following advantages:
the application provides a scene acquisition method which is applied to image pickup equipment. Before shooting by the camera equipment, initializing parameters of the cameras, establishing a three-dimensional coordinate system by taking a middle point of the first camera and a middle point of the second camera as an origin, determining a position coordinate of a first reference object, determining an optical axis included angle between the first camera and the second camera according to the position coordinate of the first reference object, determining inclination angles of the two cameras, determining horizontal view angles of the two cameras according to the determined inclination angles and the optical axis included angle, determining a focal length, and then cutting and splicing images shot by the first camera and the second camera in the same time to generate a panoramic image. According to the application, the horizontal view angle of the double shooting is determined according to the included angle of the optical axis and the inclined angle, a larger view field range can be obtained in the horizontal direction, after the horizontal view angle is determined, the focal length of the camera is determined, so that each reference object far or near from the camera equipment in a scene can be clearly displayed, after the parameters of the camera are determined, the panoramic image generated by cutting and splicing the images shot by the first camera and the second camera is further expanded on the basis of ensuring the high resolution of the pixels of the details of the remote object, the problem of dead zone existing in shooting is solved, and the image quality of scene acquisition is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic structural diagram of an image pickup apparatus according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a first reference point according to an embodiment of the present application;
fig. 3 is another schematic structural diagram of an image capturing apparatus according to an embodiment of the present application;
FIG. 4 is another schematic diagram of a first reference point provided in an embodiment of the present application;
Fig. 5 is a flowchart of an embodiment of a scene acquisition method according to an embodiment of the present application;
fig. 6 is a flowchart of another embodiment of a scene acquisition method according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an embodiment of a scene acquisition device according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of another embodiment of a scene acquisition device according to an embodiment of the present application;
Fig. 9 is a schematic structural diagram of an embodiment of a scene acquisition device according to an embodiment of the present application.
Detailed Description
The video camera, as a common electronic product, is widely used in network communication and video chat, and also plays an important role in assisting classroom teaching. At present it is widely deployed in college classrooms, where the main application scenes are video monitoring and classroom attendance. In particular, combining technologies such as face recognition to help teachers take classroom attendance and analyze teaching quality, for example judging learning effects according to students' expressions during teaching, has always been a hotspot of camera application.
However, in optical imaging, a large scene and high resolution are a pair of contradictions, because the field of view of a single camera is limited. The image pickup device of the application therefore comprises a first camera, a second camera and a base, the first camera and the second camera being arranged on the base. A three-dimensional coordinate system is established with the midpoint of the image pickup device as the origin; the optical axis included angle of the first camera and the second camera is determined according to the determined position coordinate of a first reference object; the optical axis angle, the inclination angle, the horizontal angle of view and the focal length of the cameras are determined in sequence; and the images shot by the first camera and the second camera are then processed to generate an image with a larger field of view and improved overall resolution, so that the overall quality of the acquired image can be further improved.
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
In this embodiment, the scene acquisition method may be implemented in a terminal, a server, a system or the like, which is not specifically limited. For ease of description, the embodiments of the present application are described taking a system as the execution subject.
Referring to fig. 5 in conjunction with fig. 1, fig. 2, fig. 3, and fig. 4, an embodiment of the present application provides a scene acquisition method, including:
101. initializing parameters of the first camera and the second camera, wherein the parameters comprise an optical axis angle, an inclination angle and a focal length;
as shown in fig. 1 or fig. 2, the image capturing apparatus used in the embodiment of the present application may have a base, a first camera and a second camera; the base includes two horizontal bottom surfaces with an adjustable angle a between them, and the first camera and the second camera are symmetrically arranged on the corresponding horizontal bottom surfaces respectively.
In the embodiment of the application, the image pickup device is arranged in a horizontal shooting scene at a height of more than 2 m, for example embedded at the upper end of a blackboard in a classroom, and is used for collecting image information of the students or the teacher in the classroom for subsequent analysis and processing. It is not suitable for scenes shot vertically downwards, such as passenger flow statistics on buses and subways.
In order to facilitate the subsequent processing of the shot images and the presentation of the effect, the related parameters of the cameras must meet certain requirements, so the parameters of the dual cameras need to be initialized. The parameters include, but are not limited to, the optical axis angle of each camera, the inclination angle of the camera relative to the blackboard, the focal length of the lens, and the like.
102. Establishing a three-dimensional coordinate system by taking the middle point of the first camera and the second camera as an origin, determining the position coordinate of a first reference object, and determining an optical axis included angle between the first camera and the second camera according to the position coordinate of the first reference object;
in the established three-dimensional coordinate system, the x-axis is arranged transversely along the target scene area, the y-axis is arranged longitudinally along the target scene area, and the z-axis is arranged vertically.
In the embodiment of the present application, as shown in fig. 2 or fig. 4, the fields of view of the first camera and the second camera partially overlap in the application scene, so an object located between the first camera and the second camera should be selected as the first reference object where possible. The position of the first reference object can serve as the intersection point of one side ray extending from the midpoint of the first camera and one extending from the midpoint of the second camera; meanwhile, another side ray extends from the midpoint of each camera, so that the included angle formed by the two side rays emitted by each camera covers half of the application scene as far as possible. The first reference object may be a table and chair, a student, or the like placed in the scene, which is not particularly limited.
In order to ensure the symmetry of the positions of the dual cameras, an object that deviates as little as possible from the middle of the dual cameras should be selected as the first reference object A. Specifically, when the midpoint between the first camera and the second camera is taken as the origin, the coordinate of the first reference object A located between the dual cameras is (x, 0, z); one side ray extending from the midpoint of the first camera and one from the midpoint of the second camera intersect at the first reference object A, while the other side rays start from the corresponding cameras and extend to positions on the boundary of the application scene. The optical axis is the centre line of the included angle formed by the two side rays, the optical axis angle is the angle between either side ray and the corresponding centre line, and the optical axis included angle is the sum of the optical axis angles of the first camera and the second camera.
Wherein b is an optical axis included angle between the first camera and the second camera.
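The geometry above can be sketched in code, treating each camera's optical axis as the bisector of its two side rays and b as the angle between the two axes. This is a top-view illustration only; the camera spacing, reference-object position and scene corners below are hypothetical values, not taken from the patent.

```python
import math

def unit(v):
    """Normalize a 2D vector."""
    n = math.hypot(v[0], v[1])
    return (v[0] / n, v[1] / n)

def optical_axis(cam, edge_pt_a, edge_pt_b):
    """Optical axis direction of one camera: the bisector of the two side
    rays from the camera to the two boundary points it must cover."""
    ra = unit((edge_pt_a[0] - cam[0], edge_pt_a[1] - cam[1]))
    rb = unit((edge_pt_b[0] - cam[0], edge_pt_b[1] - cam[1]))
    return unit((ra[0] + rb[0], ra[1] + rb[1]))

def included_angle_deg(ax1, ax2):
    """Angle between two unit vectors, in degrees."""
    dot = max(-1.0, min(1.0, ax1[0] * ax2[0] + ax1[1] * ax2[1]))
    return math.degrees(math.acos(dot))

# Top view (x: across the room, y: depth). Cameras 0.2 m apart around the
# origin, reference object A on the centre line, far corners at x = +/-4 m.
cam1, cam2 = (-0.1, 0.0), (0.1, 0.0)
ref_a = (0.0, 3.0)
axis1 = optical_axis(cam1, ref_a, (-4.0, 6.0))
axis2 = optical_axis(cam2, ref_a, (4.0, 6.0))
b = included_angle_deg(axis1, axis2)  # optical axis included angle
```

With the symmetric layout above, b comes out as the sum of the two equal optical axis angles, matching the definition in the text.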
103. Determining the inclination angles of the first camera and the second camera;
in the embodiment of the application, the first camera and the second camera can rotate, so that the shooting range of the cameras reaches as far as possible and the rear rows have no shooting blind area, thereby covering the rear rows in the depth of an application scene such as a classroom. The inclination angles of the first camera and the second camera are determined relative to the blackboard, and can be adjusted to a certain extent according to the depth of the application scene.
104. Determining the horizontal view field range of the first camera and the second camera according to the inclined angle and the included angle of the optical axis;
in the embodiment of the application, the dual cameras are located above the blackboard, higher than the positions of detection objects such as students and teachers. In order to avoid shooting blind areas on the left and right sides of the scene, a certain inclination angle and optical axis included angle need to be set, and the horizontal field of view of the first camera and the second camera is determined through the set inclination angle and optical axis included angle. The optical axis included angle gives the horizontal angle of view of the first camera and the second camera, and the inclination angle extends the horizontal angle of view so as to expand the field of view.
For example, when the included angle between the optical axes of the first camera and the second camera is determined, the range that can be shot by the camera device is only the first reference object and the area in front of the first reference object, and when the tilt angles of the first camera and the second camera (the first camera and the second camera rotate upwards by a certain angle relative to the blackboard) are determined and adjusted, the shooting field of view of the camera device is extended, and the shooting range can be enlarged to the area behind the first reference object.
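As an illustration of how the horizontal field of view could follow from the identified width of the target scene area and the distance of the nearest detection object, consider the sketch below. It assumes the camera pair sits on the perpendicular bisector of the identified width, and the classroom dimensions are hypothetical, not taken from the patent.

```python
import math

def horizontal_fov_deg(identified_width_m: float, first_abs_distance_m: float) -> float:
    """Horizontal field of view (degrees) needed to span a scene of the
    identified width at the distance of the nearest detection object.
    Assumes the cameras sit on the perpendicular bisector of that width."""
    half_angle = math.atan((identified_width_m / 2.0) / first_abs_distance_m)
    return math.degrees(2.0 * half_angle)

# e.g. an 8 m wide classroom with the nearest student 2.2 m from the device
fov = horizontal_fov_deg(8.0, 2.2)
```

The nearer the first detection object, the larger the required horizontal field of view, which is why the front-row distance drives this step.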
105. Determining the focal lengths of the first camera and the second camera;
in the embodiment of the application, the dual cameras are located right above the blackboard, higher than the positions of detection objects such as students and teachers. In order to clearly identify the facial expression or limb motion of the target detection object through the first camera and the second camera, the collected facial information or limb motion information of the target detection object can be controlled to meet a resolution of at least 30×30 pixels. Specifically, after the identification resolution of the target detection object is determined, the focal lengths of the first camera and the second camera are determined by combining data such as the distance from the target detection object to the blackboard and the size of the target detection identification area, so as to meet the identification requirement of the target detection object.
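A minimal pinhole-model sketch of this focal length calculation follows. The 2 µm pixel size and 150 mm face width below are assumed illustration values, not figures from the patent; only the 30-pixel minimum comes from the text.

```python
def required_focal_length_mm(distance_mm: float, region_mm: float,
                             pixels_needed: int, pixel_size_mm: float) -> float:
    """Pinhole-model focal length such that a detection region of width
    region_mm, at distance distance_mm, spans at least pixels_needed pixels
    on a sensor with the given pixel pitch."""
    return pixels_needed * pixel_size_mm * distance_mm / region_mm

# e.g. a 150 mm wide face in the last row, 9 m away, at the 30-pixel
# minimum, on a sensor with 2 um (0.002 mm) pixels
f_mm = required_focal_length_mm(9000.0, 150.0, 30, 0.002)
```

Under these assumptions the required focal length lands in the low millimetres, consistent in magnitude with the f = 4.05 mm used in the worked depth-of-field example later in the text.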
Further, the focal length can be adjusted automatically with the change of the target detection object during shooting, or it can be fixed at a value that meets the detection requirement of the last row in an application scene such as a classroom; that is, there are two modes, fixed focus and zooming.
If the fixed focus mode is adopted, the depth-of-field range needs to cover the working range of the first camera/second camera, where the near depth-of-field limit is calculated as shown in formula (1) and the far depth-of-field limit as shown in formula (2):

L_near = f^2 · L / (f^2 + F · q · (L − f))   (1)

L_far = f^2 · L / (f^2 − F · q · (L − f))   (2)

Wherein F is the aperture (f-number) of the camera, L is the focusing distance, f is the focal length of the camera, and q is the diameter of the circle of confusion.
Specifically, assuming that the working range of the image capturing apparatus is 2.2 m to 9.0 m, with f = 4.05 mm, F = 1.8, L = 3000 mm and q = 0.00224 mm, the calculated depth-of-field range is 1.7 m to 11.4 m, which satisfies the working requirements of application scenes such as classrooms.
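The worked example can be checked numerically. The sketch below assumes the standard thin-lens depth-of-field limits, which reproduce the 1.7 m and 11.4 m figures from the parameters given in the text.

```python
def depth_of_field_mm(f_mm: float, f_number: float,
                      focus_dist_mm: float, coc_mm: float):
    """Near and far depth-of-field limits (mm) for a thin-lens camera:
    L_near = f^2*L / (f^2 + F*q*(L - f)), L_far = f^2*L / (f^2 - F*q*(L - f))."""
    h = f_mm * f_mm                                   # f^2, shared by both limits
    spread = f_number * coc_mm * (focus_dist_mm - f_mm)  # F*q*(L - f)
    near = h * focus_dist_mm / (h + spread)
    far = h * focus_dist_mm / (h - spread)
    return near, far

# the parameters from the text: f = 4.05 mm, F = 1.8, L = 3000 mm, q = 0.00224 mm
near, far = depth_of_field_mm(4.05, 1.8, 3000.0, 0.00224)
# near is roughly 1.7 m and far roughly 11.4 m, matching the text
```

Since 1.7 m < 2.2 m and 11.4 m > 9.0 m, the depth-of-field range indeed covers the stated 2.2 m to 9.0 m working range.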
106. Respectively acquiring a first partial image shot by a first camera and a second partial image shot by a second camera within preset time;
after the parameters of the first camera and the second camera are adjusted, in order to meet the application requirements of scene edge image acquisition subsequently, a first partial image shot by the first camera and a second partial image shot by the second camera are required to be respectively acquired within the same preset time, and then the two partial images are processed and synthesized. In practical application, when the camera is in an indoor scene such as a classroom, if the first camera is arranged on the left plane of the base and the second camera is arranged on the right plane of the base as shown in fig. 2, the first camera is used for capturing a left half image, and the second camera is used for capturing a right half image; if the first camera is disposed on the left plane of the base and the second camera is disposed on the right plane of the base as shown in fig. 4, the first camera focuses on the right half image and the second camera focuses on the left half image.
107. The first partial image and the second partial image are cropped and spliced to generate a panoramic image.
In the embodiment of the application, because of the limited field of view of each camera, shooting blind areas on the two sides of the front row are unavoidable.
For example, as shown in fig. 2, when the first camera focuses on capturing the left half image of the scene, a shooting blind area exists in the front row area of the right half; when the second camera shoots the right half image, a shooting blind area exists in the front row area of the left half. Therefore, in order to avoid the blind areas on the two sides of the front row, the first partial image shot by the first camera and the second partial image shot by the second camera need to be cropped so as to keep the image part well captured by the corresponding camera, and the kept image parts are then spliced to generate a new panoramic image without shooting blind areas in the front and rear rows. As another example, as shown in fig. 4, when the first camera focuses on shooting the right half image and the second camera the left half image, the problem of shooting blind areas in the front row area is avoided, but the shot overlapping areas are larger; therefore the first reference object determined above can be used as a demarcation point to crop the first partial image shot by the first camera and the second partial image shot by the second camera so as to keep the image parts shot by the corresponding cameras, which are then spliced to generate a new panoramic image without shooting blind areas in the front and rear rows.
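The cropping and splicing step can be sketched as follows on row-major pixel arrays. This is a toy illustration; the single boundary column stands in for the demarcation point at the first reference object, and real images would also need the overlap alignment the patent describes.

```python
def crop_and_stitch(first_img, second_img, boundary_col):
    """Keep the left part of the first camera's image and the right part of
    the second camera's image, splicing them at the boundary column.
    Both images are row-major lists of equal-length pixel rows."""
    stitched = []
    for row_a, row_b in zip(first_img, second_img):
        stitched.append(row_a[:boundary_col] + row_b[boundary_col:])
    return stitched

# toy 2x4 "images": camera 1 covers the left half well (1s),
# camera 2 covers the right half well (2s); 0 marks a blind/poor region
img1 = [[1, 1, 0, 0], [1, 1, 0, 0]]
img2 = [[0, 0, 2, 2], [0, 0, 2, 2]]
pano = crop_and_stitch(img1, img2, 2)
# pano == [[1, 1, 2, 2], [1, 1, 2, 2]] -- no blind (0) pixels remain
```

The spliced result keeps only the well-captured half from each camera, which is the effect the text describes for eliminating front-row blind areas.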
Optionally, the left and right edges of the two cropped images may exhibit blank jagged borders. In that case, a boundary recognition algorithm balancing precision and efficiency can be used to rapidly identify the jagged boundary of each image, and the set of pixels within that boundary can be denoised to eliminate the blank jagged edges.
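The cropping-and-splicing step above can be sketched in pure Python, treating an image as a list of pixel rows. The `boundary` column is a hypothetical parameter standing in for the demarcation point derived from the first reference object; it decides which columns of each partial image are retained:

```python
def crop_and_stitch(left_img, right_img, boundary):
    """Retain columns [0, boundary) of the image covering the left half
    and columns [boundary, width) of the image covering the right half,
    then join them row by row into a single panoramic image."""
    if len(left_img) != len(right_img):
        raise ValueError("partial images must have the same height")
    return [l_row[:boundary] + r_row[boundary:]
            for l_row, r_row in zip(left_img, right_img)]

# Two 2x4 toy "images": 1 = pixels from the first camera, 2 = from the second.
left = [[1, 1, 1, 1], [1, 1, 1, 1]]
right = [[2, 2, 2, 2], [2, 2, 2, 2]]
panorama = crop_and_stitch(left, right, boundary=2)
```

In a real deployment the splice would also blend the overlap region rather than cut hard at one column; this sketch only shows the retained-portion logic.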
In the embodiment of the application, an image capturing device with dual cameras is provided: the optical axes and inclination angles of the two cameras are set at a designed included angle, a focal length is set according to the application scene, the front-left and front-right portions of the scene area are captured separately, and the two images are then cropped and spliced. This enlarges the angle of view in the horizontal direction, solves the problem of shooting blind areas, and improves the image quality of scene acquisition.
Referring to fig. 6, an embodiment of the present application provides another embodiment of a scene acquisition method applied to an image capturing apparatus, where the image capturing apparatus includes a first camera, a second camera, and a base, and includes:
201. initializing parameters of the first camera and the second camera, wherein the parameters comprise an optical axis angle, an inclination angle and a focal length;
202. establishing a three-dimensional coordinate system by taking the middle point of the first camera and the second camera as an origin, determining the position coordinate of a first reference object, and determining an optical axis included angle between the first camera and the second camera according to the position coordinate of the first reference object;
203. Determining the inclination angles of the first camera and the second camera;
steps 201 to 203 in this embodiment are similar to steps 101 to 103 in the previous embodiment, and are not repeated here.
204. Determining the identified width of the target scene area according to the extension range of the included angle of the optical axis;
205. determining a detection object group in a target scene area, and acquiring a first position coordinate of a detection object closest to the camera equipment in the extension range of the inclination angle;
206. determining a first absolute distance between the first location coordinate and the origin;
207. determining the horizontal view field range of the first camera and the second camera according to the identified width and the first absolute distance;
In this application scene, the dual-camera device is located directly above the blackboard, higher than detection objects such as students and the teacher. To avoid shooting blind areas in the left and right rows of the scene, a suitable inclination angle and optical-axis included angle must be set. The extension ranges of the inclination angle and the optical-axis included angle are used to determine the position of the relevant detection object and the recognized width of the scene, respectively, and these position and width data are then processed to determine the horizontal field of view of the first camera and the second camera.
In practical application, in a classroom scene, in order for the horizontal field of view to cover the students from the first row to the last row, the extension range of the optical-axis included angle determines the recognized width of the target scene area as 7000 mm, and the extension range of the inclination angle determines the first position coordinates of the detection object closest to the image capturing device as (2200, y, z). The horizontal angle of view of the first camera and the second camera can then be determined as shown in formula (3).
Where θ1 is the horizontal angle of view of the first camera and the second camera, d1 is the recognized width of the classroom, 7000 mm, and s1 is the absolute distance, 2200 mm, from the position of the closest detection object to the blackboard (which can be regarded as the origin).
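Formula (3) itself is not reproduced in this text. A common geometric form consistent with the quantities named above is θ1 = 2·arctan(d1 / (2·s1)), and the following sketch assumes that form:

```python
import math

def horizontal_fov_deg(d1_mm, s1_mm):
    """Angle of view needed to cover a recognized width d1 at an
    absolute distance s1 from the origin (assumed form of formula (3)):
    theta1 = 2 * arctan(d1 / (2 * s1))."""
    return math.degrees(2.0 * math.atan(d1_mm / (2.0 * s1_mm)))

# Classroom values from the text: width 7000 mm, closest row 2200 mm away.
theta1 = horizontal_fov_deg(7000, 2200)  # roughly 116 degrees
```

An angle this wide is shared between the two cameras, which is exactly why the embodiment splits the scene into left and right halves rather than relying on one lens.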
208. Acquiring second position coordinates of a detection object farthest from the imaging equipment in the extending range of the inclination angle, and determining a second absolute distance between the detection object and the origin according to the second position coordinates;
209. determining a detection area size of the farthest detection object and a pixel number in a horizontal direction of the image pickup apparatus;
210. determining focal lengths of the first camera and the second camera according to the second absolute distance, the detection area size, the pixel number and the pixel size of the camera equipment;
In the embodiment of the application, in order for the first camera and the second camera to clearly identify the facial expressions or limb actions of the target detection object, their focal lengths can be determined from the second absolute distance between the target detection object and the blackboard, the size of the target detection area, the pixel size of the image capturing device, and related data.
Specifically, in practical application, in a classroom scene the facial-expression recognition requirement for the student in the last row is that the number of pixels occupied by a face be no less than b×b, for example 30×30. Let the focal length of either camera be f, the second absolute distance from the last-row student (i.e., the farthest detection object) to the blackboard be d, the detection area size be c, the pixel size of the image capturing device be p, and the number of pixels in the horizontal direction of the image capturing device be n. Then the focal length f must satisfy the following formula (4).
The required focal length of the camera can then be calculated from the above formula.
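Formula (4) is likewise not reproduced here. Under the standard pinhole model, an object of size c at distance d projects to a sensor image of size f·c/d, covering f·c/(d·p) pixels; requiring at least b pixels gives f ≥ b·d·p/c. The sketch below assumes that form, with hypothetical classroom values:

```python
def min_focal_length_mm(b, d_mm, c_mm, p_mm):
    """Smallest focal length f (in mm) such that an object of size c
    at distance d covers at least b pixels of pitch p:
    f * c / (d * p) >= b  =>  f >= b * d * p / c."""
    return b * d_mm * p_mm / c_mm

# Hypothetical values: 30-pixel face requirement, last row 10 m away,
# 150 mm face width, 3 um (0.003 mm) pixel pitch.
f_min = min_focal_length_mm(b=30, d_mm=10000, c_mm=150, p_mm=0.003)  # 6.0 mm
```

The horizontal pixel count n enters separately, as an upper bound: the sensor width n·p together with f fixes the achievable field of view, so f cannot be raised arbitrarily without shrinking coverage below the θ1 determined earlier.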
211. Respectively acquiring a first partial image shot by a first camera and a second partial image shot by a second camera within preset time;
212. clipping and stitching the first part of image and the second part of image to generate a panoramic image;
Steps 211 to 212 in this embodiment are similar to steps 106 to 107 in the previous embodiment, and are not repeated here.
213. Performing face recognition on the panoramic image to update attendance information of the detection object;
214. performing anomaly detection object identification on the panoramic image to obtain an anomaly detection object region;
215. and carrying out human body gesture recognition on the abnormal detection object area, and generating class evaluation of the abnormal detection object by combining the human body gesture recognition result with attendance information.
In the embodiment of the application, the state of the detection objects can be learned by analyzing the panoramic image. For example, in a classroom scene, face recognition can be performed on the students and teacher present, and the recognized personal information can be used to update the attendance records of the corresponding detection objects. The method for recognizing the faces of the students and teacher present may follow existing face recognition algorithms. Preferably, all photos of the students in the class, together with the personal information associated with each photo, are stored in advance; the currently recognized detection object is compared with the pre-stored photos, and if the similarity is greater than or equal to a preset value, the detection object is considered to be the student associated with that photo. The personal information preferably includes, but is not limited to, the student's name and/or student number.
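The photo-comparison step can be sketched as a nearest-neighbour lookup over pre-stored face embeddings. The embedding vectors, the cosine-similarity measure, and the 0.9 threshold below are all hypothetical, standing in for whatever face recognition algorithm and preset similarity value a deployment actually uses:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def identify(query_embedding, stored, threshold=0.9):
    """Return the best-matching student if the similarity to a
    pre-stored embedding reaches the preset value, else None."""
    best_name, best_sim = None, -1.0
    for name, emb in stored.items():
        sim = cosine_similarity(query_embedding, emb)
        if sim > best_sim:
            best_name, best_sim = name, sim
    return best_name if best_sim >= threshold else None

# Hypothetical pre-stored embeddings keyed by student id.
students = {"student_01": [0.9, 0.1, 0.4], "student_02": [0.1, 0.8, 0.6]}
match = identify([0.88, 0.12, 0.41], students)
```

A matched identity would then be used to mark the corresponding attendance record as present.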
Further, optionally, anomaly detection object recognition may be performed on the panoramic image. An anomaly detection object here covers two cases: a region that is not a human body, and a human body that is not seated. Preferably, the process of human body posture recognition within an anomaly detection object region of the panoramic image includes, but is not limited to: performing human body recognition on the abnormal target region and, if the result is a human body, further performing posture recognition. If the posture recognition result is that the person is standing, the identity of the standing person can be confirmed with the corresponding attendance information, and if the person is a student, a class evaluation such as "actively answering questions" can be generated; if the posture recognition result is that the person is lying down, a class evaluation such as "absent-minded in class" can be generated.
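As an illustrative sketch only (not the patent's actual recognizer), the posture decision and the evaluation rules above can be mimicked with a bounding-box heuristic; all thresholds here are hypothetical:

```python
def classify_posture(bbox_w, bbox_h, seated_max_h, standing_min_ratio=1.4):
    """Toy heuristic: a tall, narrow body box above the seated height
    suggests standing; a wide, flat box suggests lying down; anything
    else is treated as seated (not anomalous)."""
    ratio = bbox_h / bbox_w
    if bbox_h > seated_max_h and ratio >= standing_min_ratio:
        return "standing"
    if ratio < 1.0:
        return "lying"
    return "seated"

def class_evaluation(posture, attendance_name):
    """Combine the posture result with attendance information to
    produce a class evaluation string."""
    if posture == "standing":
        return f"{attendance_name}: actively answering questions"
    if posture == "lying":
        return f"{attendance_name}: absent-minded in class"
    return f"{attendance_name}: normal"

# Hypothetical 40x160-pixel body box, seated height threshold 120 px.
evaluation = class_evaluation(classify_posture(40, 160, seated_max_h=120),
                              "student_01")
```

A production system would use a keypoint-based pose model rather than box shape, but the downstream evaluation logic would be combined with attendance information in the same way.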
In the embodiment of the application, on the basis of solving the shooting blind area problem and improving the image quality of scene acquisition, the state of the detection objects in the acquired image can be further analyzed so as to update their related information.
Referring to fig. 7, an embodiment of the present application provides an embodiment of a scene acquisition device, which is applied to an image capturing apparatus, where the image capturing apparatus includes a first camera, a second camera, and a base, and includes:
A parameter initializing unit 301, configured to initialize parameters of the first camera and the second camera, where the parameters include an optical axis angle, an inclination angle, and a focal length;
the first determining unit 302 is configured to establish a three-dimensional coordinate system with an intermediate point of the first camera and the second camera as an origin, determine a position coordinate of the first reference object, and determine an optical axis included angle between the first camera and the second camera according to the position coordinate of the first reference object;
a second determining unit 303, configured to determine an inclination angle of the first camera and the second camera;
a third determining unit 304, configured to determine a horizontal field of view range of the first camera and the second camera according to the inclination angle and the included angle of the optical axis;
a fourth determining unit 305, configured to determine focal lengths of the first camera and the second camera;
a first obtaining unit 306, configured to obtain a first partial image captured by the first camera and a second partial image captured by the second camera in a preset time respectively;
a first generating unit 307, configured to crop and stitch the first partial image and the second partial image to generate a panoramic image.
In the embodiment of the present application, the parameter initializing unit 301 first initializes the parameters of the cameras on the image capturing apparatus so that they can be readjusted later. The first determining unit 302 then determines the optical-axis included angle of the image capturing apparatus, the second determining unit 303 determines the inclination angles of the first camera and the second camera, and the third determining unit 304 determines their horizontal field of view so as to obtain a larger field of view in the horizontal direction. Next, the fourth determining unit 305 determines the focal lengths of the first camera and the second camera so that the cameras can clearly capture the rear-row reference objects of the scene, that is, the image resolution is improved. Finally, the first obtaining unit 306 obtains the first partial image captured by the first camera and the second partial image captured by the second camera, and the first generating unit 307 crops and splices them to generate a panoramic image. The field of view of the image capturing apparatus is thereby expanded, the shooting blind area problem is solved, and the image quality of scene acquisition is improved.
Referring to fig. 8, an embodiment of the present application provides another embodiment of a scene acquisition device, which is applied to an image capturing apparatus, where the image capturing apparatus includes a first camera, a second camera, and a base, and includes:
a parameter initializing unit 401, configured to initialize parameters of the first camera and the second camera, where the parameters include an optical axis angle, an inclination angle, and a focal length;
a first determining unit 402, configured to establish a three-dimensional coordinate system with an intermediate point of the first camera and the second camera as an origin, determine a first reference object position coordinate, and determine an optical axis included angle between the first camera and the second camera according to the first reference object position coordinate;
a second determining unit 403, configured to determine an inclination angle of the first camera and the second camera;
a third determining unit 404, configured to determine a horizontal field of view range of the first camera and the second camera according to the inclination angle and the included angle of the optical axis;
a fourth determining unit 405, configured to determine focal lengths of the first camera and the second camera;
a first obtaining unit 406, configured to obtain a first partial image captured by the first camera and a second partial image captured by the second camera in a preset time respectively;
a first generating unit 407 configured to crop and stitch the first partial image and the second partial image to generate a panoramic image;
An attendance information processing unit 408, configured to perform face recognition on the panoramic image to update attendance information of the detection object;
an anomaly object detection unit 409 for performing anomaly detection object recognition on the panoramic image to obtain an anomaly detection object region;
the evaluation processing unit 410 is configured to perform human body gesture recognition on the abnormality detection target area, and combine the human body gesture recognition result with attendance information to generate a class evaluation of the abnormality detection target.
In an embodiment of the present application, the third determining unit 404 may include:
a width determining module 4041, configured to determine an identified width of the target scene area according to the extension range of the optical axis included angle;
a first processing module 4042, configured to determine a group of detection objects in the target scene area, and acquire a first position coordinate of a detection object closest to the image capturing apparatus in an extension range of the tilt angle;
a first distance determination module 4043 for determining a first absolute distance between the first location coordinates and the origin;
the horizontal view angle determining module 4044 is configured to determine a horizontal view range of the first camera and the second camera according to the identified width and the first absolute distance.
In an embodiment of the present application, the fourth determining unit 405 may include:
A second distance determining module 4051, configured to obtain second position coordinates of a detection object farthest from the imaging apparatus within the extension range of the tilt angle, and determine a second absolute distance from the origin according to the second position coordinates;
a pixel number determination module 4052 for determining the detection area size of the farthest detection object and the number of pixels in the horizontal direction of the image pickup apparatus;
the focal length determining module 4053 is configured to determine focal lengths of the first camera and the second camera according to the second absolute distance, the size of the detection area, the number of pixels, and the size of the pixels of the image capturing apparatus.
In the embodiment of the present application, the x-axis of the three-dimensional coordinate system established by the first determining unit 402 is disposed transversely along the target scene area, the y-axis is disposed longitudinally along the target scene area, and the z-axis is disposed vertically.
In the embodiment of the application, the base of the applied image pickup device comprises a plurality of horizontal bottom surfaces with an adjustable included angle between every two of them, and the first camera and the second camera are symmetrically arranged on the corresponding horizontal bottom surfaces, respectively.
Referring to fig. 9, fig. 9 is a schematic diagram of a scene acquisition device according to an embodiment of the present application, including:
a processor 501, a memory 502, an input-output unit 503, and a bus 504;
The processor 501 is connected to the memory 502, the input/output unit 503, and the bus 504;
the memory 502 holds a program, and the processor 501 calls the program to execute the following method:
initializing parameters of the first camera and the second camera, wherein the parameters comprise an optical axis angle, an inclination angle and a focal length;
establishing a three-dimensional coordinate system by taking the middle point of the first camera and the second camera as an origin, determining the position coordinate of a first reference object, and determining an optical axis included angle between the first camera and the second camera according to the position coordinate of the first reference object;
determining the inclination angles of the first camera and the second camera;
determining the horizontal view field range of the first camera and the second camera according to the inclined angle and the included angle of the optical axis;
determining the focal lengths of the first camera and the second camera;
respectively acquiring a first partial image shot by a first camera and a second partial image shot by a second camera within preset time;
the first partial image and the second partial image are cropped and spliced to generate a panoramic image.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
In the several embodiments provided in the present application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, in the part contributing to the prior art, or in whole or in part, may be embodied in the form of a software product stored in a storage medium, including instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods according to the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.

Claims (11)

1. A scene acquisition method, characterized by being applied to an image pickup apparatus including a first camera, a second camera, and a base, the scene acquisition method comprising:
initializing parameters of the first camera and the second camera, wherein the parameters comprise an optical axis angle, an inclination angle and a focal length, and the inclination angle is an inclination angle in the vertical direction;
Establishing a three-dimensional coordinate system by taking the middle point of the first camera and the second camera as an origin, determining a first reference object position coordinate, and determining an optical axis included angle between the first camera and the second camera according to the first reference object position coordinate;
determining an inclination angle of the first camera and the second camera in the vertical direction;
determining the position of a detection object closest to and farthest from the image pickup device according to the included angle between the inclination angle and the optical axis;
determining the horizontal field of view range of the first camera and the second camera according to the distance between the position of the detection object closest to the camera equipment and the origin;
determining a detection area size of a detection object farthest from the image capturing apparatus, a number of pixels in a horizontal direction of the image capturing apparatus, and a pixel size of the image capturing apparatus;
determining focal lengths of the first camera and the second camera according to the position of the detection object farthest from the image pickup device, the detection area size, the pixel number and the pixel size;
respectively acquiring a first partial image shot by the first camera and a second partial image shot by the second camera in preset time;
And clipping and splicing the first partial image and the second partial image to generate a panoramic image.
2. The scene collection method according to claim 1, wherein the determining the position of the detection object closest and farthest from the image capturing apparatus according to the inclination angle and the optical axis angle includes:
determining the identified width of the target scene area according to the extension range of the included angle of the optical axis;
determining a detection object group in the target scene area, and acquiring a first position coordinate of a detection object closest to the imaging equipment and a second position coordinate of a detection object farthest from the imaging equipment in the extension range of the inclination angle;
the determining the horizontal field of view range of the first camera and the second camera according to the distance between the position of the detection object closest to the imaging device and the origin comprises:
determining a first absolute distance between the first location coordinate and the origin;
and determining the horizontal field of view range of the first camera and the second camera according to the identified width and the first absolute distance.
3. The scene collecting method according to claim 2, wherein said determining the focal lengths of the first camera and the second camera based on the position of the detection object farthest from the image capturing apparatus, the detection area size, the pixel count, and the pixel size includes:
Determining a second absolute distance from the origin point according to the second position coordinates;
and determining the focal lengths of the first camera and the second camera according to the second absolute distance, the detection area size, the pixel number and the pixel size.
4. A scene acquisition method according to any one of claims 1 to 3, characterized in that, after the cropping and stitching of the first partial image and the second partial image, the scene acquisition method further comprises:
and carrying out face recognition on the panoramic image so as to update attendance information of the detection object.
5. A scene acquisition method according to any one of claims 1 to 3, characterized in that, after the cropping and stitching of the first partial image and the second partial image, the scene acquisition method further comprises:
performing anomaly detection object identification on the panoramic image to obtain an anomaly detection object region;
and carrying out human body gesture recognition on the abnormal detection object area, and generating classroom evaluation of the abnormal detection object according to a human body gesture recognition result.
6. The scene acquisition method according to claim 2, characterized in that in the three-dimensional coordinate system, an x-axis is arranged laterally along the target scene area, a y-axis is arranged longitudinally along the target scene area, and a z-axis is arranged vertically.
7. The scene collecting method according to claim 1, wherein the base of the camera device comprises a plurality of horizontal bottom surfaces, an adjustable included angle is arranged between every two horizontal bottom surfaces, and the first camera and the second camera are symmetrically arranged on the corresponding horizontal bottom surfaces respectively.
8. A scene acquisition device, characterized by being applied to an image capturing apparatus, the image capturing apparatus comprising a first camera, a second camera, and a base, the scene acquisition device comprising:
the device comprises a parameter initialization unit, a first camera and a second camera, wherein the parameter initialization unit is used for initializing parameters of the first camera and the second camera, the parameters comprise an optical axis angle, an inclined angle and a focal length, and the inclined angle is an inclined angle in the vertical direction;
the first determining unit is used for establishing a three-dimensional coordinate system by taking the middle point of the first camera and the middle point of the second camera as an origin, determining the position coordinate of a first reference object, and determining the included angle of the optical axis between the first camera and the second camera according to the position coordinate of the first reference object;
a second determining unit configured to determine an inclination angle of the first camera and the second camera in a vertical direction;
a third determining unit, configured to determine a position of a detection object closest to and farthest from the image capturing apparatus according to the inclination angle and the optical axis included angle, and determine a horizontal field of view range of the first camera and the second camera according to a distance between the position of the detection object closest to the image capturing apparatus and the origin;
A fourth determination unit configured to determine a detection area size of a detection object that is farthest from the image pickup apparatus, a number of pixels in a horizontal direction of the image pickup apparatus, and a pixel size of the image pickup apparatus; determining focal lengths of the first camera and the second camera according to the position of the detection object farthest from the image pickup device, the detection area size, the pixel number and the pixel size;
the first acquisition unit is used for respectively acquiring a first partial image shot by the first camera and a second partial image shot by the second camera in preset time;
and the first generation unit is used for clipping and splicing the first partial image and the second partial image to generate a panoramic image.
9. A scene acquisition device, the scene acquisition device comprising:
a processor, a memory, an input-output unit, and a bus;
the processor is connected with the memory, the input/output unit and the bus;
the memory holds a program that the processor invokes to perform the scene acquisition method according to any one of claims 1 to 7.
10. An image pickup apparatus including a first camera, a second camera, and a base, applied to the scene acquisition method according to any one of claims 1 to 7.
11. A computer readable storage medium comprising a computer program which, when run on a computer, causes the computer to perform the scene acquisition method of any one of claims 1 to 7.
CN202210484370.1A 2022-04-29 2022-04-29 Scene acquisition method, acquisition device, image pickup equipment and storage medium Active CN115065782B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210484370.1A CN115065782B (en) 2022-04-29 2022-04-29 Scene acquisition method, acquisition device, image pickup equipment and storage medium

Publications (2)

Publication Number Publication Date
CN115065782A CN115065782A (en) 2022-09-16
CN115065782B true CN115065782B (en) 2023-09-01

Family

ID=83196588

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210484370.1A Active CN115065782B (en) 2022-04-29 2022-04-29 Scene acquisition method, acquisition device, image pickup equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115065782B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116129362B (en) * 2023-04-14 2023-06-30 四川三思德科技有限公司 River floating pollutant monitoring method based on coordinate transverse section diagram

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101540822A (en) * 2009-04-28 2009-09-23 南京航空航天大学 Device and method for high-resolution large-viewing-field aerial image forming
JP2012093511A (en) * 2010-10-26 2012-05-17 Mathematec Corp Panoramic attachment for 3d camera
CN105933678A (en) * 2016-07-01 2016-09-07 湖南源信光电科技有限公司 Multi-focal length lens linkage imaging device based on multi-target intelligent tracking
CN107578450A (en) * 2017-09-14 2018-01-12 长沙全度影像科技有限公司 A kind of method and system for the demarcation of panorama camera rigging error
CN107635135A (en) * 2017-09-20 2018-01-26 歌尔股份有限公司 Double method of testings and test system for taking the photograph relative dip angle before module group assembling
CN213484975U (en) * 2020-11-24 2021-06-18 深圳市云辉牧联科技有限公司 Stock farm stock placement device

Also Published As

Publication number Publication date
CN115065782A (en) 2022-09-16

Similar Documents

Publication Publication Date Title
US11042994B2 (en) Systems and methods for gaze tracking from arbitrary viewpoints
US20120120202A1 (en) Method for improving 3 dimensional effect and reducing visual fatigue and apparatus enabling the same
CN106981078B (en) Sight line correction method and device, intelligent conference terminal and storage medium
CN111144356B (en) Teacher sight following method and device for remote teaching
CN111355884B (en) Monitoring method, device, system, electronic equipment and storage medium
CN106713740B (en) Positioning tracking camera shooting method and system
US20140009503A1 (en) Systems and Methods for Tracking User Postures to Control Display of Panoramas
CN103379267A (en) Three-dimensional space image acquisition system and method
US20230239457A1 (en) System and method for corrected video-see-through for head mounted displays
US11715236B2 (en) Method and system for re-projecting and combining sensor data for visualization
CN115065782B (en) Scene acquisition method, acquisition device, image pickup equipment and storage medium
CN108282650B (en) Naked eye three-dimensional display method, device and system and storage medium
JP2003179800A (en) Device for generating multi-viewpoint image, image processor, method and computer program
CN112470189B (en) Occlusion cancellation for light field systems
CN105488780A (en) Monocular vision ranging tracking device used for industrial production line, and tracking method thereof
Shimizu et al. Surgery Recording without Occlusions by Multi-view Surgical Videos.
WO2016179694A1 (en) Spherical omnipolar imaging
CN111200686A (en) Photographed image synthesizing method, terminal, and computer-readable storage medium
CN111246116B (en) Method for intelligent framing display on screen and mobile terminal
CN110581977B (en) Video image output method and device and three-eye camera
CN115396602A (en) Scene shooting control method, device and system based on three-camera system
CN112261281B (en) Visual field adjusting method, electronic equipment and storage device
CN114022562A (en) Panoramic video stitching method and device capable of keeping integrity of pedestrians
CN113744133A (en) Image splicing method, device and equipment and computer readable storage medium
CN106856558B 3D image monitoring system with automatic camera transmission function and monitoring method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant