CN115550551A - Automatic focusing method and device for shooting equipment, electronic equipment and storage medium


Info

Publication number: CN115550551A
Application number: CN202211200070.2A
Authority: CN (China)
Prior art keywords: focusing, video frame, area, current video, target
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 张伟俊, 马龙祥, 侯俊
Applicant and current assignee: Insta360 Innovation Technology Co Ltd

Landscapes

  • Studio Devices (AREA)

Abstract

The application relates to an automatic focusing method and device for shooting equipment, an electronic device and a storage medium. The method comprises the following steps: performing target tracking on a preset target person to obtain the area position and the first size of a tracking target corresponding to the preset target person in the current video frame; determining a face detection area in the current video frame according to the area position and the first size; performing face detection on the face detection area to obtain a detection result; and selecting corresponding focusing parameters according to the detection result, and focusing the shooting equipment that captures the current video frame based on the focusing parameters, wherein different detection results correspond to different selected focusing parameters. By adopting the method, the focusing accuracy can be improved, stable and continuous automatic focus tracking is achieved, the imaging effect is stable, and the image quality is clear.

Description

Automatic focusing method and device for shooting equipment, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of image capturing technologies, and in particular, to an auto-focusing method and apparatus for a capturing device, an electronic device, and a storage medium.
Background
With the development of image shooting technology, electronic devices have become part of everyday life, and more and more people use various shooting devices to take pictures, record videos and live-stream in order to record their lives. At present, most shooting equipment on the market has an automatic focusing function. In application scenes that contain people, the common technical scheme is a "face-priority focusing" mode: when a human face is detected in the picture, the face area is focused preferentially, so that the imaging picture has clear image quality and the subject is sharp.
However, when a person turns around, the above method is prone to losing the focusing target because the face can no longer be detected, so the shooting equipment tends to focus inaccurately and the captured picture deteriorates.
Disclosure of Invention
In view of the above, it is desirable to provide an auto-focusing method and apparatus for an image capturing device, an electronic device, and a storage medium, which can improve focusing accuracy.
In a first aspect, the present application provides an auto-focusing method for a photographing apparatus. The method comprises the following steps:
carrying out target tracking on a preset target person to obtain the area position and the first size of a tracking target corresponding to the preset target person in the current video frame;
determining a face detection area in the current video frame according to the area position and the first size;
carrying out face detection on the face detection area to obtain a detection result;
selecting corresponding focusing parameters according to the detection result, and focusing the shooting equipment for collecting the current video frame based on the focusing parameters; and when the detection results are different, the selected focusing parameters are different.
In some of these embodiments, the current video frame includes multiple people; before the target tracking of the preset target person, the method further comprises:
determining the center position of a first video frame;
detecting a coverage area of a tracking target corresponding to each person in the first video frame;
calculating a center point of the coverage area;
calculating the distance between the central position and the central point corresponding to each tracking target;
and marking the person corresponding to the minimum distance in the plurality of distances as the preset target person.
In some of these embodiments, the current video frame includes a plurality of people; before the target tracking of the preset target person, the method further comprises:
detecting a coverage area of a tracking target corresponding to each person in a first video frame; marking the person corresponding to the coverage area with the largest first size among the coverage areas as the preset target person; or,
in response to an input target selection instruction, marking the person specified by the target selection instruction in the first video frame as the preset target person.
In some embodiments, the focusing parameter is a parameter for representing focusing with a head feature, a facial feature, or a human eye feature;
the selecting corresponding focusing parameters according to the detection result and focusing the shooting equipment for collecting the current video frame based on the focusing parameters comprises the following steps:
and selecting focusing parameters for focusing by utilizing head features, facial features or human eye features based on the detection result, and focusing the shooting equipment for collecting the current video frame according to the head features, the facial features or the human eye features.
In some embodiments, said focusing the photographing device that acquired the current video frame in accordance with the head feature, the facial feature, or the eye feature comprises:
when the face detection region does not contain the facial features of the preset target person, focusing the shooting equipment for collecting the current video frame according to the head features;
when the face detection region contains the facial features of the preset target person, detecting whether the facial features comprise human eye features; if not, focusing the shooting equipment for collecting the current video frame according to the facial features; and if so, focusing the shooting equipment for acquiring the current video frame according to the human eye features in the facial features.
In some embodiments, the method further comprises:
and when the preset target person is not located at the preset position of the current video frame, adjusting the shooting equipment according to the preset target person so as to enable the preset target person to be located at the preset position of the current video frame.
In some embodiments, said determining a face detection region in said current video frame based on said region location and a first size comprises:
when the tracking target is the head or the head-shoulder part of the preset target person, taking a head area or a head-shoulder area corresponding to the area position and the first size as a face detection area in the current video frame;
when the tracking target is a human body of the preset target person, selecting a target area according to the area position in the current video frame, and taking the target area as a face detection area; wherein the first size is larger than a second size of the target area.
In a second aspect, the application further provides an automatic focusing device of the shooting device. The device comprises:
the target tracking module is used for carrying out target tracking on a preset target person to obtain the area position and the first size of a tracking target corresponding to the preset target person in the current video frame;
a face detection area determining module, configured to determine a face detection area in the current video frame according to the area position and the first size;
the face detection module is used for carrying out face detection on the face detection area to obtain a detection result;
the focusing module is used for selecting corresponding focusing parameters according to the detection result and focusing the shooting equipment for acquiring the current video frame based on the focusing parameters; and when the detection results are different, the selected focusing parameters are different.
In a third aspect, the present application further provides an electronic device. The electronic device comprises a memory and a processor, wherein the memory stores a computer program, and the processor realizes the automatic focusing method of the shooting device in the first aspect embodiment when executing the computer program.
In a fourth aspect, the present application further provides a computer-readable storage medium. The computer-readable storage medium has stored thereon a computer program that, when executed by a processor, implements the auto-focusing method of the photographing apparatus in the embodiment of the first aspect.
In a fifth aspect, the present application further provides a computer program product. The computer program product includes a computer program, and the computer program realizes the automatic focusing method of the shooting device in the first aspect when executed by the processor.
According to the automatic focusing method, the automatic focusing device, the electronic equipment and the storage medium of the shooting equipment, the preset target person is subjected to target tracking to obtain the area position and the first size of the tracking target corresponding to the preset target person in the current video frame, then the human face detection area is determined in the current video frame according to the area position and the first size, then the human face detection area is subjected to human face detection to obtain the detection result, finally different focusing parameters are selected according to the different detection results, and the shooting equipment for collecting the current video frame is focused based on the selected focusing parameters. Therefore, when the shooting equipment shoots, the focusing target cannot be lost, the focusing accuracy is improved, stable and continuous automatic focusing tracking is realized, the imaging effect is stable, and the image quality is clear.
Drawings
FIG. 1 is a flow chart of an auto-focusing method of a photographing apparatus according to some embodiments;
FIG. 2 is a flowchart illustrating an auto-focusing method of a camera in another embodiment;
FIG. 3 is a flowchart illustrating an auto-focusing method of a camera in another embodiment;
FIG. 4 is a schematic flow chart of the face detection region determination step in some embodiments;
FIG. 5 is a flow chart of an auto-focusing method of a photographing apparatus according to another embodiment;
FIG. 6 is a block diagram of an auto-focusing device of the photographing apparatus in some embodiments;
FIG. 7 is a diagram of the internal structure of an electronic device in some embodiments.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative and are not intended to limit the present application.
In some embodiments, as shown in fig. 1, an auto-focusing method of a photographing apparatus is provided, and the present embodiment is exemplified by applying the method to the photographing apparatus. The shooting equipment comprises electronic equipment capable of shooting, such as a camera, a smart phone, a tablet personal computer and a personal computer.
In this embodiment, the auto-focusing method of the photographing apparatus includes the following steps:
Step 102, performing target tracking on a preset target person to obtain a region position and a first size of a tracking target corresponding to the preset target person in a current video frame.
The tracking target may refer to a tracking portion when a preset target person is subjected to target tracking, and the tracking target includes any one of a human body, a head, or a head-shoulder portion of the preset target person. The region position may refer to a position of a region where the tracking target is located in the current video frame. The first size may refer to the size of the area in the current video frame where the tracking target is located.
In some embodiments, while the preset target person is turning around, the shooting equipment captures both the side face and the front face of the preset target person, and the features of the side face and the front face differ; when the preset target person turns until his or her back faces the shooting equipment, the face of the preset target person disappears from the picture.
In some embodiments, when performing target tracking on a preset target person, a circumscribed rectangle is used to frame the tracking target, and the framed area is used as an area of the tracking target in the current video frame (i.e., an area where the tracking target is located). For example, when performing target tracking with the head of a preset target person, the head is framed with a circumscribed rectangle frame, and the framed area is taken as the area of the head in the current video frame.
In some embodiments, a preset target tracking algorithm is used for performing target tracking on a preset target person to obtain a region position and a first size of a tracking target corresponding to the preset target person in a current video frame.
In some embodiments, target tracking may be performed on the preset target person by using a DCF (Discriminative Correlation Filter) tracker or another filtering tracker, by using a SiamRPN (Siamese Region Proposal Network) tracker or another tracker based on CNN (Convolutional Neural Network) technology, or by using other trackers; the application is not limited in this respect.
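As a non-limiting illustration of this step, a DCF-family tracker shipped with OpenCV (for example CSRT from the opencv-contrib-python package, a discriminative-correlation-filter tracker) can return the area position and first size of the tracking target frame by frame. The sketch below assumes an initial bounding box around the preset target person and a video source, neither of which is prescribed by the application:

```python
import cv2  # requires opencv-contrib-python for the CSRT tracker


def track_target(video_path, init_bbox):
    """Sketch: run a DCF-family tracker (CSRT) and yield, for every video frame,
    the area position (x, y) and first size (w, h) of the tracking target.
    init_bbox is an assumed (x, y, w, h) box around the preset target person."""
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    if not ok:
        raise RuntimeError("cannot read the video source")

    tracker = cv2.TrackerCSRT_create()  # CSRT is a discriminative correlation filter tracker
    tracker.init(frame, tuple(int(v) for v in init_bbox))

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        found, bbox = tracker.update(frame)  # bbox = (x, y, w, h) of the tracking target
        if found:
            x, y, w, h = (int(v) for v in bbox)
            yield frame, (x, y), (w, h)      # frame, area position, first size
    cap.release()
```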
In some embodiments, before step 102, the shooting equipment may further detect the tracking target of each person, specifically: in response to a target tracking instruction, the second video frame is detected to obtain the tracking target. For example, the user makes an OK gesture towards the shooting equipment, which triggers the shooting equipment to detect the second video frame captured in real time and thereby detect the head, the head-shoulder area, or the human body of each person.
Specifically, the target tracking instruction may refer to an instruction for instructing the shooting device to initiate target tracking on the person, for example, after the user issues an OK gesture, the shooting device generates the target tracking instruction.
In some embodiments, the target tracking instruction may be generated by the shooting device according to a specific button triggered by a user, or generated by the shooting device according to a gesture or a human body posture of the user, or generated by the shooting device according to an instruction issued by the user to the shooting device through a control device (e.g., a remote control device), or generated automatically after the shooting device is turned on for a period of time (e.g., 10 s), or generated by the shooting device detecting that a person enters a specific position (e.g., a central position) of a shooting picture, which is not limited in this application.
In some embodiments, the second video frame is captured by the capture device in real-time, and the second video frame is captured prior to the current video frame.
The method of detecting the second video frame may be a common detection method, a detection method based on manual features, or a detection method based on convolutional neural network technology. Detection methods based on manual features include, but are not limited to, template matching, key point matching, and key feature methods. Detection methods based on convolutional neural network technology include, but are not limited to, YOLO (You Only Look Once), SSD (Single Shot MultiBox Detector), R-CNN (Region-based Convolutional Neural Network), and Mask R-CNN (Mask Region-based Convolutional Neural Network).
Step 104, determining a face detection area in the current video frame according to the area position and the first size.
The face detection region may refer to a region where a head or a shoulder of a preset target person is located.
In some embodiments, a face detection region is determined from the tracking target in the current video frame according to the region position and the first size obtained in the previous step. For example, when the tracking target is a head, an area corresponding to a circumscribed rectangular frame of the head may be used as the face detection area. For example, when the tracking target is a head-shoulder portion, an area corresponding to a circumscribed rectangular frame of the head-shoulder portion may be used as the face detection area. For another example, when the tracking target is a human body, a part of the human body, which is one third or one fourth above the circumscribed rectangular frame, may be selected as the face detection area.
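A minimal sketch of this determination, assuming the tracked box is given as an area position (x, y) plus a first size (w, h) and that the tracked part is known; the labels "head", "head_shoulder" and "body" and the default top fraction are illustrative choices rather than values taken from the application:

```python
def face_detection_area(area_position, first_size, tracked_part, top_fraction=0.25):
    """Sketch: derive the face detection area from the tracked box.
    For a head or head-shoulder box the circumscribed box is used directly;
    for a full-body box only the upper part is kept, so the first size is
    larger than the second size of the resulting target area."""
    x, y = area_position
    w, h = first_size
    if tracked_part in ("head", "head_shoulder"):
        return (x, y, w, h)
    # full human body: keep the upper quarter (or third/half) of the box
    return (x, y, w, int(h * top_fraction))
```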
Step 106, carrying out face detection on the face detection area to obtain a detection result.
The detection result is used to indicate whether a face of a person (i.e., a human face) exists in the human face detection region, that is, the detection result is used to indicate whether a facial feature of a preset target person exists in the human face detection region.
In some embodiments, a preset face detection algorithm is used to perform face detection on the face detection area, so as to obtain a detection result.
In some embodiments, the face detection algorithm may be a common detection method, a detection method based on manual features, or a detection method based on convolutional neural network technology. The manual feature-based detection methods include, but are not limited to, template matching, key point matching, and key feature methods. Detection methods based on the convolutional neural network technology include, but are not limited to, YOLO, SSD, R-CNN and Mask R-CNN.
It should be noted that, when performing face detection on a face detection area, if a plurality of faces are detected, the largest face is selected from the plurality of faces as a detection result; or selecting the face closest to the center position of the current video frame from the plurality of faces as a detection result; or selecting the face with the highest confidence coefficient from the multiple faces as the detection result. The confidence may be output by a detector corresponding to the foregoing detection method.
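The three selection rules can be sketched as follows, assuming each detected face is reported as a box plus a confidence score; this data structure is illustrative and not specified by the application:

```python
import math


def pick_face(faces, frame_w, frame_h, rule="largest"):
    """Sketch: choose a single face from several detections.
    Each face is assumed to be {'box': (x, y, w, h), 'score': float}."""
    if not faces:
        return None
    if rule == "largest":
        return max(faces, key=lambda f: f["box"][2] * f["box"][3])
    if rule == "closest_to_center":
        cx, cy = frame_w / 2.0, frame_h / 2.0

        def distance_to_center(f):
            x, y, w, h = f["box"]
            return math.hypot(x + w / 2.0 - cx, y + h / 2.0 - cy)

        return min(faces, key=distance_to_center)
    if rule == "highest_confidence":
        return max(faces, key=lambda f: f["score"])
    raise ValueError("unknown selection rule")
```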
Step 108, selecting corresponding focusing parameters according to the detection result, and focusing the shooting equipment for collecting the current video frame based on the focusing parameters; and when the detection results are different, the selected focusing parameters are different.
The focusing parameters may refer to parameters for focusing by the shooting device with the selected specific features. For example, the focusing parameter may be a parameter for indicating focusing with a head feature, a face feature, or a human eye feature.
The detection result may be a first detection result indicating that the face detection region has facial features or a second detection result indicating that the face detection region does not have facial features. Since the preset target person may turn around, when the face detection is performed on the face detection area, the face feature may be detected or may not be detected, and then when the detection results are different, the selected focusing parameters are different.
In some embodiments, the focusing parameters corresponding to the detection result are selected according to the detection result, and the shooting device is focused based on the focusing parameters, so that the shooting device does not lose a focusing target, and the focusing accuracy is improved.
In some embodiments, after the detection result is determined, a contrast focusing method or a phase focusing method may be adopted to focus the photographing apparatus.
Contrast focusing may refer to Contrast Detection Auto Focus (CDAF): the lens is moved back and forth to search for the lens position at which the contrast in the focusing area is largest, and that position is used as the focusing point. For example, when a face area is focused, the contrast is calculated from the pixels of the face area, the moving direction of the lens of the shooting equipment is determined according to the calculated contrast, and focusing of the lens is completed when the position with the maximum contrast is found during repeated movement. A common search method is the hill-climbing search algorithm.
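A minimal sketch of contrast-detection autofocus using a coarse sweep followed by a fine search around the best position. The `camera.set_lens_position` and `camera.capture_focus_roi` calls are hypothetical hardware hooks, and the Laplacian-variance contrast measure is one common choice rather than something mandated by the application:

```python
import cv2


def laplacian_contrast(roi_gray):
    """Contrast score of the focusing area: variance of the Laplacian."""
    return cv2.Laplacian(roi_gray, cv2.CV_64F).var()


def contrast_autofocus(camera, lens_min=0, lens_max=100, step=5):
    """Sketch of contrast-detection AF over a hypothetical camera interface."""
    best_pos, best_score = lens_min, -1.0
    for pos in range(lens_min, lens_max + 1, step):            # coarse sweep
        camera.set_lens_position(pos)
        score = laplacian_contrast(camera.capture_focus_roi())
        if score > best_score:
            best_pos, best_score = pos, score
    lo = max(lens_min, best_pos - step)
    hi = min(lens_max, best_pos + step)
    for pos in range(lo, hi + 1):                              # fine search around the peak
        camera.set_lens_position(pos)
        score = laplacian_contrast(camera.capture_focus_roi())
        if score > best_score:
            best_pos, best_score = pos, score
    camera.set_lens_position(best_pos)                         # lens rests at maximum contrast
    return best_pos
```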
The Phase focusing may refer to Phase Detection Auto Focusing (PDAF), which reserves PDAF pixel points on the photosensitive element for Phase Detection, calculates a focusing offset value according to a Phase difference, and then rapidly moves the lens to a target position according to the offset value, thereby achieving accurate focusing. Compared with contrast focusing, the phase focusing does not need repeated movement of a lens of shooting equipment, and the focusing speed is high. Illustratively, when a face area is focused, a phase difference is calculated by acquiring PDAF pixel points corresponding to the face area, a focusing offset value is determined according to the phase difference, and then a lens of a shooting device is moved according to the focusing offset value to realize focusing.
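A correspondingly minimal sketch of phase-detection autofocus; the sensor read-out call and the linear gain converting the phase difference into a lens offset are assumptions, since in practice the conversion is calibrated per camera module:

```python
def phase_autofocus(camera, focus_area, gain=1.0):
    """Sketch of PDAF: read the phase difference reported by the PDAF pixels
    inside the focusing area, convert it to a focusing offset value, and move
    the lens once. `camera.read_phase_difference` and `camera.move_lens_by`
    are hypothetical hooks, not a real camera API."""
    phase_difference = camera.read_phase_difference(focus_area)
    offset = gain * phase_difference   # assumed linear defocus model
    camera.move_lens_by(offset)
    return offset
```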
It should be noted that the focusing speed of phase focusing is faster than that of contrast focusing, and the focusing success rate of contrast focusing is higher than that of phase focusing. The specific focusing manner is not particularly limited, and a suitable focusing manner can be selected according to actual conditions.
According to the automatic focusing method of the shooting equipment, the preset target person is subjected to target tracking, the area position and the first size of the tracking target corresponding to the preset target person in the current video frame are obtained, then the face detection area is determined in the current video frame according to the area position and the first size, then the face detection area is subjected to face detection, the detection result is obtained, finally different focusing parameters are selected according to the different detection results, and the shooting equipment for collecting the current video frame is focused based on the selected focusing parameters. Therefore, when the shooting equipment shoots, the focusing target cannot be lost, the focusing accuracy is improved, stable and continuous automatic focusing tracking is realized, the imaging effect is stable, the image quality is clear, the main body is clear, and the experience of a user is improved.
As shown in fig. 2, in some embodiments, the current video frame includes a plurality of people, and before step 102, the auto-focusing method of the photographing apparatus further includes:
step 202, determining a center position of the first video frame.
Wherein the first video frame is captured by the capture device and is captured prior to the current video frame. The central position may refer to a center of a picture corresponding to a first video frame acquired by the photographing device.
For example, the center position may be the geometric center of the picture of the first video frame. For example, the picture corresponding to the first video frame is placed in a Cartesian coordinate system, and after the coordinates of the picture boundary are determined, the center position of the first video frame is determined according to the coordinates of the picture boundary.
Step 204, detecting the coverage area of the tracking target corresponding to each person in the first video frame.
The coverage area may refer to an area occupied by a circumscribed rectangle of the tracking target in the first video frame. For example, when the tracking target is a head, then the overlay area may refer to an area that is overlaid by a bounding rectangle of the head in the first video frame.
And detecting the area covered by the circumscribed rectangular frame of the tracking target of each person in the first video frame to obtain the coverage area of the tracking target corresponding to each person.
Step 206, calculating the center point of the coverage area.
The central point of the coverage area may refer to a central position of a circumscribed rectangular frame of the tracking target.
For example, when the center point of the coverage area is the center position of the circumscribed rectangle of the tracking target, the center point of the coverage area may be determined according to the four corners of the circumscribed rectangle.
Step 208, calculating the distance between the central position and the central point corresponding to each tracking target.
The distance between the center position and the center point corresponding to each tracking target may be an euclidean distance, a manhattan distance, or another distance, and the present application is not limited specifically. For example, when the euclidean distance is used, the distance between the center position and the center point corresponding to the tracking target is calculated by using a calculation formula of the euclidean distance.
Step 210, marking the person corresponding to the minimum distance in the plurality of distances as a preset target person.
In some embodiments, the person corresponding to the minimum distance among the distances calculated in step 208 is selected as the preset target person for target tracking.
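Steps 202 to 210 can be sketched as follows, assuming each person's coverage area is available as an (x, y, w, h) box and using the Euclidean distance mentioned above; the function name and inputs are illustrative:

```python
import math


def pick_center_person(frame_w, frame_h, coverage_areas):
    """Sketch of steps 202-210: return the index of the person whose tracking-target
    box center is closest to the center of the first video frame.
    coverage_areas is an assumed list of (x, y, w, h) boxes, one per person."""
    center_x, center_y = frame_w / 2.0, frame_h / 2.0          # step 202
    best_idx, best_dist = None, float("inf")
    for idx, (x, y, w, h) in enumerate(coverage_areas):        # step 204
        cx, cy = x + w / 2.0, y + h / 2.0                      # step 206: box center point
        dist = math.hypot(cx - center_x, cy - center_y)        # step 208: Euclidean distance
        if dist < best_dist:
            best_idx, best_dist = idx, dist
    return best_idx                                            # step 210: preset target person
```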
In some embodiments, the current video frame includes a plurality of people, and before step 102, the auto-focusing method of the shooting equipment further includes: detecting a coverage area of a tracking target corresponding to each person in a first video frame, and marking the person corresponding to the coverage area with the largest first size among the coverage areas as the preset target person; or, in response to an input target selection instruction, marking the person specified by the target selection instruction in the first video frame as the preset target person.
Specifically, in the present embodiment, the coverage area may refer to an area covered by the tracking target in the first video frame. After the coverage area of the tracking target corresponding to each person in the first video frame is detected, the person corresponding to the coverage area with the largest first size in each coverage area is marked as a preset target person for target tracking.
In some embodiments, the first video frame is displayed on a shooting device, and a user inputs a target selection instruction on the shooting device, so that the shooting device takes a person specified by the target selection instruction input by the user as a preset target person and then performs target tracking on the preset target person.
In some embodiments, the focusing parameter is a parameter for indicating focusing with a head feature, a face feature, or a human eye feature. Step 108 includes, but is not limited to, the following steps: and selecting focusing parameters for focusing by utilizing the head features, the face features or the human eye features based on the detection result, and focusing the shooting equipment for collecting the current video frame according to the head features, the face features or the human eye features.
Specifically, in the present embodiment, one feature is selected as a focusing parameter from a head feature, a face feature, or a human eye feature according to a difference in detection results, and focusing is performed on a shooting device that captures a current video frame based on the selected focusing parameter.
As shown in fig. 3, in some embodiments, the step of "focusing the capturing device that captures the current video frame in terms of head features, facial features, or human eye features" includes, but is not limited to, the steps of:
step 302, focusing the shooting device for collecting the current video frame according to the head feature when the face detection region does not contain the face feature of the preset target person.
In some embodiments, when the detection result indicates that the face detection area does not include the facial features of the preset target person, it indicates that the preset target person has his or her back to the shooting equipment in the current video frame. In this case, the shooting equipment that captures the current video frame is focused with the head features, that is, the shooting equipment is focused on the head of the preset target person.
For example, when the face feature of the preset target person is not included in the face detection region, a contrast focusing method or a phase focusing method may be adopted to focus the shooting device that acquires the current video frame according to the head feature.
For example, when a contrast focusing method is adopted, a circumscribed rectangular frame of a head feature is obtained, then contrast is calculated according to pixels of the circumscribed rectangular frame of the head feature, meanwhile, the moving direction of the lens of the shooting device is determined according to the calculated contrast, and when a position with the maximum contrast is found in the process of repeated movement, focusing of the lens of the shooting device is completed.
For example, when a phase focusing method is adopted, a circumscribed rectangular frame of the head feature is obtained, then a phase difference is calculated according to PDAF pixel points of the circumscribed rectangular frame of the head feature, a focusing offset value is determined according to the phase difference, and then a lens of the shooting device is moved according to the offset value to realize focusing.
Step 304, when the face detection area contains the facial features of a preset target person, detecting whether the facial features comprise human eye features; if not, focusing the shooting equipment for acquiring the current video frame according to the facial features; if yes, focusing the shooting equipment for collecting the current video frame according to the human eye features in the face features.
In some embodiments, when the face detection area contains the facial features of the preset target person, it indicates that the preset target person is facing the shooting equipment in the current video frame. In this case, a further determination is made on the face detection area to detect whether the facial features include human eye features. If human eye features exist in the facial features, the shooting equipment that captures the current video frame is focused according to the human eye features, that is, the shooting equipment is focused on the eyes of the preset target person. If no human eye features exist in the facial features, the shooting equipment that captures the current video frame is focused according to the facial features, that is, the shooting equipment is focused on the face of the preset target person.
When the facial features include human eye features, focusing on the eyes of the preset target person makes the person in the captured video frame look more lively, which improves the imaging effect. For example, if human eye features are present but the shooting equipment focuses on the person's mouth or nose, the person in the captured video frame may look dull.
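Putting steps 302 and 304 together, the choice of focusing feature can be sketched as below. OpenCV Haar cascades stand in for the face and eye detectors (the application allows any of the detection methods listed earlier), and the returned label merely names which feature the focusing parameters would select:

```python
import cv2

# Illustrative stand-in detectors; any face/eye detector could be used instead.
_face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
_eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")


def choose_focus_feature(frame_gray, face_detection_box):
    """Sketch: within the face detection area (x, y, w, h), focus on the eyes
    if they are detected, otherwise on the face, otherwise fall back to the head."""
    x, y, w, h = face_detection_box
    roi = frame_gray[y:y + h, x:x + w]
    faces = _face_cascade.detectMultiScale(roi, 1.1, 5)
    if len(faces) == 0:
        return "head", face_detection_box                 # back to camera: head features
    fx, fy, fw, fh = max(faces, key=lambda b: b[2] * b[3])
    eyes = _eye_cascade.detectMultiScale(roi[fy:fy + fh, fx:fx + fw], 1.1, 5)
    if len(eyes) == 0:
        return "face", (x + fx, y + fy, fw, fh)            # face visible, eyes not detected
    ex, ey, ew, eh = eyes[0]
    return "eyes", (x + fx + ex, y + fy + ey, ew, eh)      # focus on the eye region
```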
For example, when the facial feature includes a human eye feature, a phase focusing method or a contrast focusing method may be adopted to focus the photographing apparatus.
For example, when the contrast focusing method is used, the circumscribed rectangular frame of the eyes of the preset target person may be taken as the focusing area; the contrast is then calculated from the pixels of the focusing area, the moving direction of the lens of the shooting equipment is determined according to the calculated contrast, and focusing of the lens of the shooting equipment is completed when the position with the maximum contrast is found during repeated movement.
For example, when the phase focusing method is used, it may be considered that a circumscribed rectangular frame of human eyes of a preset target person is a focusing area, then a phase difference is calculated according to PDAF pixel points corresponding to the focusing area, a focusing offset value is determined according to the phase difference, and then a lens of a shooting device is moved according to the offset value to realize focusing.
In some embodiments, the auto-focusing method of the photographing apparatus further includes: and when the preset target person is not located at the preset position of the current video frame, adjusting the shooting device according to the preset target person so as to enable the preset target person to be located at the preset position of the current video frame.
Specifically, the preset position may refer to a position designated in advance in the present embodiment. The preset position may be a center position of the current video frame, or may be other positions, which is not limited in this application.
And when the preset target person is not located at the preset position of the current video frame, adjusting the shooting equipment to rotate so as to enable the preset target person to be located at the preset position of the current video frame.
In some embodiments, the photographing apparatus may be held by the cradle head to fix the photographing apparatus. During the focusing process of the shooting device, the position of the preset target person in a video frame shot by the shooting device can be adjusted by adjusting the rotation of the holder.
In some embodiments, the photographing apparatus may be adjusted by: calculating the offset of the position of a preset target figure in the current video frame and the preset position; sending a control instruction to the holder according to the offset so that the holder is adjusted according to the control instruction; the control command is generated according to the offset and is used for adjusting the holder.
For example, when the offset is smaller than an offset threshold, the difference between the position of the preset target person in the current video frame and the preset position may be considered small, and the pan/tilt head may be left unadjusted, or fine adjustment may be performed according to the specific offset. That is, in this case, the shooting equipment may generate no control instruction, keeping the pan/tilt head still, or it may generate and send a control instruction to the pan/tilt head according to the specific offset so that the pan/tilt head makes a fine adjustment and the preset target person stays at the preset position.
For example, when the offset is greater than or equal to the offset threshold, the position of the preset target person in the current video frame may be considered to differ significantly from the preset position. In this case, the shooting equipment generates and sends a control instruction to the pan/tilt head according to the specific offset, so that the pan/tilt head adjusts according to the control instruction and the preset target person is brought to the preset position.
It should be noted that the adjustment process of the shooting device is performed all the time to ensure that the preset target person is at the preset position of the video frame captured by the shooting device.
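A minimal sketch of this adjustment, assuming a hypothetical `gimbal.rotate(pan, tilt)` interface and a simple proportional control instruction; the offset threshold expressed as a fraction of the frame size and the gain value are illustrative assumptions:

```python
def keep_subject_at_preset_position(gimbal, subject_center, preset_position,
                                    frame_w, frame_h, threshold_frac=0.02, gain=0.1):
    """Sketch: compute the offset between the preset target person's position and the
    preset position, and drive the pan/tilt head accordingly."""
    dx = subject_center[0] - preset_position[0]
    dy = subject_center[1] - preset_position[1]
    # Offset below the threshold: the difference is small, keep the pan/tilt head still
    if abs(dx) < threshold_frac * frame_w and abs(dy) < threshold_frac * frame_h:
        return (0.0, 0.0)
    # Otherwise generate and send a control instruction proportional to the offset
    pan_cmd, tilt_cmd = gain * dx, gain * dy
    gimbal.rotate(pan_cmd, tilt_cmd)
    return (pan_cmd, tilt_cmd)
```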
Referring to fig. 4, in some embodiments, step 104 includes, but is not limited to, the following steps:
step 402, when the tracking target is the head or the head-shoulder part of a preset target person, in the current video frame, taking a head area or a head-shoulder area corresponding to the area position and the first size as a face detection area.
Step 404, when the tracking target is a human body of a preset target person, selecting a target area according to the area position in the current video frame, and taking the target area as a human face detection area; wherein the first size is larger than the second size of the target area.
Specifically, in this embodiment, the head region may refer to a region corresponding to the head, such as a region covered by a circumscribed rectangular frame of the head. The head-shoulder area may refer to an area corresponding to the head and the shoulder, such as an area covered by a circumscribed rectangular frame of the head-shoulder portion.
When the tracking target is the head of a preset target person, determining a circumscribed rectangular frame corresponding to the head according to the region position and the first size in the current video frame, taking the circumscribed rectangular frame as a head region corresponding to the head, and taking the head region as a face detection region.
When the tracked target is the head-shoulder part of the preset target person, determining a circumscribed rectangular frame corresponding to the head-shoulder part according to the region position and the first size in the current video frame, taking the circumscribed rectangular frame as the head-shoulder region corresponding to the head-shoulder part, and taking the head-shoulder region as the face detection region.
When the tracking target is the human body of the preset target person, an upper part of the human body is selected according to the area position in the current video frame to obtain a target area, and the circumscribed rectangular frame of that part is used as the face detection area.
For example, a quarter of the upper part of the human body is selected to obtain a target area, and the target area is used as a face detection area, so that the first size is four times the second size of the target area. Similarly, a part of the upper third of the human body may be selected as the target area, or a part of the upper half of the human body may be selected as the target area.
Referring to fig. 5, some embodiments of the present application provide an auto-focusing method for a photographing apparatus, including but not limited to the following steps:
step 502, responding to the target tracking instruction, detecting the second video frame to obtain a tracking target.
Step 504, detecting the coverage area of the tracking target corresponding to each person in the first video frame; and marking the person corresponding to the coverage area with the largest first size in each coverage area as a preset target person.
Step 506, performing target tracking on the preset target person to obtain a region position and a first size of a tracking target corresponding to the preset target person in the current video frame.
Step 508, when the tracking target is the head or the head-shoulder part of a preset target person, in the current video frame, taking a head area or a head-shoulder area corresponding to the area position and the first size as a face detection area; when the tracking target is a human body of a preset target person, selecting a target area according to the area position in the current video frame, and taking the target area as a human face detection area, wherein the first size is larger than the second size of the target area.
Step 510, carrying out face detection on the face detection area to obtain a detection result.
In some embodiments, a preset face detection algorithm is used to perform face detection on the face detection area to obtain a detection result. If the facial features of the preset target person are detected, the method proceeds to step 514; if the detection result indicates that the face detection area does not contain the facial features of the preset target person, the method proceeds to step 512.
Step 512, when the face detection area does not contain the facial features of the preset target person, focusing the shooting equipment for acquiring the current video frame according to the head features.
Step 514, when the face detection area contains the facial features of a preset target person, detecting whether the facial features comprise human eye features; if not, focusing the shooting equipment for acquiring the current video frame according to the facial features; if yes, focusing the shooting equipment for collecting the current video frame according to the human eye features in the face features.
Illustratively, when the facial features include human eye features, then a phase focusing method or a contrast focusing method may be adopted to focus the photographing apparatus.
For example, when the contrast focusing method is used, the circumscribed rectangular frame of the eyes of the preset target person may be taken as the focusing area; the contrast is then calculated from the pixels of the focusing area, the moving direction of the lens of the shooting equipment is determined according to the calculated contrast, and focusing of the lens of the shooting equipment is completed when the position with the maximum contrast is found during repeated movement.
For example, when the phase focusing method is used, it may be considered that a circumscribed rectangular frame of human eyes of a preset target person is a focusing area, then a phase difference is calculated according to PDAF pixel points corresponding to the focusing area, a focusing offset value is determined according to the phase difference, and then a lens of a shooting device is moved according to the offset value to realize focusing.
The specific steps of steps 502 to 514 may refer to the embodiments of fig. 1 to 4.
It should be understood that, although the steps in the flowcharts of the above embodiments are shown in the order indicated by the arrows, they are not necessarily executed in that order. Unless explicitly stated otherwise herein, the execution order of these steps is not strictly limited, and the steps may be executed in other orders. Moreover, at least some of the steps in the flowcharts of the above embodiments may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments; their execution order is not necessarily sequential, and they may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
Based on the same inventive concept, the embodiment of the present application further provides an auto-focusing device of a shooting device for implementing the above-mentioned auto-focusing method of the shooting device.
In some embodiments, as shown in fig. 6, there is provided an auto-focusing apparatus of a photographing device, including: a target tracking module 602, a face detection area determination module 604, a face detection module 606, and a focus module 608, wherein:
the target tracking module 602 is configured to perform target tracking on a preset target person to obtain a region position and a first size of a tracking target in a current video frame, where the tracking target corresponds to the preset target person.
And a face detection region determining module 604, configured to determine a face detection region in the current video frame according to the region position and the first size.
And the face detection module 606 is configured to perform face detection on the face detection area to obtain a detection result.
The focusing module 608 is configured to select a corresponding focusing parameter according to the detection result, and focus the shooting device that acquires the current video frame based on the focusing parameter; and when the detection results are different, the selected focusing parameters are different.
In some embodiments, a plurality of persons are included in the current video frame, and the auto-focusing device of the photographing apparatus further includes:
and the center position determining module is used for determining the center position of the first video frame.
And the first coverage area detection module is used for detecting the coverage area of the tracking target corresponding to each person in the first video frame.
And the central point calculating module is used for calculating the central point of the coverage area.
And the distance calculation module is used for calculating the distance between the central position and the central point corresponding to each tracking target.
The first marking module is used for marking the person corresponding to the minimum distance in the plurality of distances as a preset target person.
In some embodiments, the auto-focusing device of the photographing apparatus further includes, but is not limited to:
the second marking module is used for detecting the coverage area of the tracking target corresponding to each person in the first video frame; and marking the person corresponding to the coverage area with the largest first size in each coverage area as a preset target person.
And the third marking module is used for responding to the input target selection instruction and marking the person specified by the target selection instruction in the first video frame as a preset target person.
In some embodiments, the focusing parameters are parameters representing focusing using head features, facial features, or human eye features, and the focusing module 608 includes:
and the focusing unit is used for selecting focusing parameters for focusing by utilizing the head features, the face features or the human eye features based on the detection result and focusing the shooting equipment for acquiring the current video frame according to the head features, the face features or the human eye features.
In some embodiments, the focusing unit includes:
the first focusing subunit is used for focusing the shooting equipment for acquiring the current video frame according to the head characteristics when the face detection area does not contain the facial characteristics of a preset target person;
the second focusing subunit is used for detecting whether the facial features comprise human eye features or not when the facial features of a preset target person are contained in the human face detection area; if not, focusing the shooting equipment for acquiring the current video frame according to the facial features; if yes, focusing the shooting equipment for collecting the current video frame according to the human eye features in the face features.
In some embodiments, the auto-focusing apparatus of the photographing apparatus further includes:
and the adjusting module is used for adjusting the shooting equipment according to the preset target person when the preset target person is not at the preset position of the current video frame, so that the preset target person is at the preset position of the current video frame.
In some embodiments, the face detection region determination module 604 includes:
and the first determining unit is used for taking a head area or a head-shoulder area corresponding to the area position and the first size as a face detection area in the current video frame when the tracking target is the head or the head-shoulder part of the preset target person.
The second determining unit is used for selecting a target area according to the area position in the current video frame when the tracking target is a human body of a preset target person, and taking the target area as a human face detection area; wherein the first size is larger than the second size of the target area.
The modules in the automatic focusing device of the shooting device can be wholly or partially realized by software, hardware and a combination thereof. The modules may be embedded in a hardware form or may be independent of a processor in the electronic device, or may be stored in a memory in the electronic device in a software form, so that the processor calls and executes operations corresponding to the modules.
In some embodiments, an electronic device is provided, which may be a terminal, and its internal structure diagram may be as shown in fig. 7. The electronic device comprises a processor, a memory, a communication interface, a display screen and an input device which are connected through a system bus. The communication interface, the input device and the display screen of the electronic equipment are connected with the system bus through the I/O interface. The processor of the electronic device is used to provide computing and control capabilities. The memory of the electronic equipment comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The communication interface of the electronic device is used for communicating with an external terminal in a wired or wireless manner, and the wireless manner can be realized through WIFI, a mobile cellular network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement an auto-focus method of a photographing apparatus. The display screen of the electronic equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the electronic equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the electronic equipment, an external keyboard, a touch pad or a mouse and the like.
Those skilled in the art will appreciate that the structure shown in fig. 7 is a block diagram of only part of the structure related to the solution of the present application and does not constitute a limitation on the electronic devices to which the solution is applied; a particular electronic device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In some embodiments, there is provided an electronic device comprising a memory and a processor, the memory having stored therein a computer program, the processor when executing the computer program implementing the steps of:
carrying out target tracking on a preset target person to obtain the area position and the first size of a tracking target corresponding to the preset target person in the current video frame; determining a face detection area in the current video frame according to the area position and the first size; carrying out face detection on the face detection area to obtain a detection result; selecting corresponding focusing parameters according to the detection result, and focusing the shooting equipment for acquiring the current video frame based on the focusing parameters; and when the detection results are different, the selected focusing parameters are different.
In some embodiments, the processor, when executing the computer program, further performs the steps of:
determining a center position of a first video frame; detecting a coverage area of a tracking target corresponding to each person in the first video frame; calculating the center point of the coverage area; calculating the distance between the center position and the center point corresponding to each tracking target; and marking the person corresponding to the minimum distance among the distances as the preset target person.
In some embodiments, the processor, when executing the computer program, further performs the steps of:
detecting a coverage area of a tracking target corresponding to each person in a first video frame; marking the person corresponding to the coverage area with the largest first size among the coverage areas as the preset target person; or, in response to an input target selection instruction, marking the person specified by the target selection instruction in the first video frame as the preset target person.
In some embodiments, the processor, when executing the computer program, further performs the steps of:
and selecting focusing parameters for focusing by utilizing the head features, the face features or the human eye features based on the detection result, and focusing the shooting equipment for collecting the current video frame according to the head features, the face features or the human eye features.
In some embodiments, the processor, when executing the computer program, further performs the steps of:
when the face detection area does not contain the facial features of a preset target person, focusing the shooting equipment for collecting the current video frame according to the head features; when the face detection area contains the facial features of a preset target person, detecting whether the facial features comprise human eye features or not; if not, focusing the shooting equipment for acquiring the current video frame according to the facial features; if yes, focusing the shooting equipment for collecting the current video frame according to the human eye features in the face features.
In some embodiments, the processor, when executing the computer program, further performs the steps of:
and when the preset target person is not at the preset position of the current video frame, adjusting the shooting equipment according to the preset target person so as to enable the preset target person to be at the preset position of the current video frame.
In some embodiments, the processor, when executing the computer program, further performs the steps of:
when the tracking target is the head or the head-shoulder part of the preset target person, taking a head area or a head-shoulder area corresponding to the area position and the first size as the face detection area in the current video frame; when the tracking target is the human body of the preset target person, selecting a target area according to the area position in the current video frame, and taking the target area as the face detection area; wherein the first size is larger than the second size of the target area.
In some embodiments, there is provided a computer readable storage medium on which a computer program is stored, the computer program when executed by a processor implementing the steps of:
carrying out target tracking on a preset target person to obtain the area position and the first size of a tracking target corresponding to the preset target person in the current video frame; determining a face detection area in the current video frame according to the area position and the first size; carrying out face detection on the face detection area to obtain a detection result; selecting corresponding focusing parameters according to the detection result, and focusing the shooting equipment for acquiring the current video frame based on the focusing parameters; and when the detection results are different, the selected focusing parameters are different.
In one embodiment, the computer program when executed by the processor further performs the steps of:
determining the center position of a first video frame; detecting the coverage area of the tracking target corresponding to each person in the first video frame; calculating the center point of each coverage area; calculating the distance between the center position and the center point corresponding to each tracking target; and marking the person corresponding to the smallest of the distances as the preset target person.
In one embodiment, the computer program when executed by the processor further performs the steps of:
detecting the coverage area of the tracking target corresponding to each person in the first video frame, and marking the person whose coverage area has the largest first size as the preset target person; or, in response to an input target selection instruction, marking the person specified by the target selection instruction in the first video frame as the preset target person.
In one embodiment, the computer program when executed by the processor further performs the steps of:
selecting, based on the detection result, a focusing parameter for focusing with the head feature, the facial feature or the human eye feature, and focusing the shooting equipment for acquiring the current video frame according to the head feature, the facial feature or the human eye feature.
In one embodiment, the computer program when executed by the processor further performs the steps of:
when the face detection area does not contain the facial features of the preset target person, focusing the shooting equipment for acquiring the current video frame according to the head features; when the face detection area contains the facial features of the preset target person, detecting whether the facial features include human eye features; if not, focusing the shooting equipment for acquiring the current video frame according to the facial features; if yes, focusing the shooting equipment for acquiring the current video frame according to the human eye features in the facial features.
In one embodiment, the computer program when executed by the processor further performs the steps of:
when the preset target person is not at the preset position of the current video frame, adjusting the shooting equipment according to the preset target person so that the preset target person is at the preset position of the current video frame.
In one embodiment, the computer program when executed by the processor further performs the steps of:
when the tracking target is the head or the head-shoulder part of the preset target person, taking the head area or the head-shoulder area corresponding to the area position and the first size as the face detection area in the current video frame; when the tracking target is the human body of the preset target person, selecting a target area according to the area position in the current video frame, and taking the target area as the face detection area; wherein the first size is larger than the second size of the target area.
In one embodiment, a computer program product is provided, comprising a computer program which, when executed by a processor, performs the steps of:
carrying out target tracking on a preset target person to obtain the area position and the first size of a tracking target corresponding to the preset target person in the current video frame; determining a face detection area in the current video frame according to the area position and the first size; carrying out face detection on the face detection area to obtain a detection result; selecting corresponding focusing parameters according to the detection result, and focusing the shooting equipment for acquiring the current video frame based on the focusing parameters; and when the detection results are different, the selected focusing parameters are different.
In one embodiment, the computer program when executed by the processor further performs the steps of:
determining the center position of a first video frame; detecting the coverage area of the tracking target corresponding to each person in the first video frame; calculating the center point of each coverage area; calculating the distance between the center position and the center point corresponding to each tracking target; and marking the person corresponding to the smallest of the distances as the preset target person.
In one embodiment, the computer program when executed by the processor further performs the steps of:
detecting the coverage area of the tracking target corresponding to each person in the first video frame, and marking the person whose coverage area has the largest first size as the preset target person; or, in response to an input target selection instruction, marking the person specified by the target selection instruction in the first video frame as the preset target person.
In one embodiment, the computer program when executed by the processor further performs the steps of:
selecting, based on the detection result, a focusing parameter for focusing with the head feature, the facial feature or the human eye feature, and focusing the shooting equipment for acquiring the current video frame according to the head feature, the facial feature or the human eye feature.
In one embodiment, the computer program when executed by the processor further performs the steps of: when the face detection area does not contain the facial features of the preset target person, focusing the shooting equipment for acquiring the current video frame according to the head features; when the face detection area contains the facial features of the preset target person, detecting whether the facial features include human eye features; if not, focusing the shooting equipment for acquiring the current video frame according to the facial features; if yes, focusing the shooting equipment for acquiring the current video frame according to the human eye features in the facial features.
In one embodiment, the computer program when executed by the processor further performs the steps of: when the preset target person is not at the preset position of the current video frame, adjusting the shooting equipment according to the preset target person so that the preset target person is at the preset position of the current video frame.
In one embodiment, the computer program when executed by the processor further performs the steps of: when the tracking target is the head or the head-shoulder part of the preset target person, taking the head area or the head-shoulder area corresponding to the area position and the first size as the face detection area in the current video frame; when the tracking target is the human body of the preset target person, selecting a target area according to the area position in the current video frame, and taking the target area as the face detection area; wherein the first size is larger than the second size of the target area.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by instructing relevant hardware through a computer program, which can be stored in a non-volatile computer-readable storage medium; when executed, the computer program can include the processes of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (MRAM), ferroelectric random access memory (FRAM), phase change memory (PCM), graphene memory, and the like. Volatile memory can include random access memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM can take many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM). The databases referred to in the various embodiments provided herein may include at least one of relational and non-relational databases. Non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided herein may be general-purpose processors, central processing units, graphics processing units, digital signal processors, programmable logic devices, data processing logic devices based on quantum computing, and the like, without limitation.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, as long as a combination of technical features contains no contradiction, it should be considered to fall within the scope of this specification.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but this should not be construed as limiting the scope of the application. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, and these all fall within the protection scope of the application. Therefore, the protection scope of the present application shall be subject to the appended claims.

Claims (10)

1. An auto-focusing method of a photographing apparatus, the method comprising:
carrying out target tracking on a preset target person to obtain the area position and the first size of a tracking target corresponding to the preset target person in the current video frame;
determining a face detection area in the current video frame according to the area position and the first size;
carrying out face detection on the face detection area to obtain a detection result;
selecting corresponding focusing parameters according to the detection result, and focusing the shooting equipment for collecting the current video frame based on the focusing parameters; and when the detection results are different, the selected focusing parameters are different.
2. The method of claim 1, wherein the current video frame includes a plurality of people; before the target tracking of the preset target person, the method further comprises:
determining a center position of a first video frame;
detecting a coverage area of a tracking target corresponding to each person in the first video frame;
calculating a center point of the coverage area;
calculating the distance between the central position and the central point corresponding to each tracking target;
and marking the person corresponding to the minimum distance in the plurality of distances as the preset target person.
3. The method of claim 1, wherein the current video frame includes a plurality of people; before the target tracking of the preset target person, the method further comprises:
detecting a coverage area of a tracking target corresponding to each person in a first video frame; marking the person corresponding to the coverage area with the largest first size in each coverage area as the preset target person; or,
in response to an input target selection instruction, marking the person specified by the target selection instruction in the first video frame as the preset target person.
4. The method according to claim 1, wherein the focusing parameter is a parameter for representing focusing with a head feature, a face feature, or a human eye feature;
the selecting corresponding focusing parameters according to the detection result and focusing the shooting equipment for collecting the current video frame based on the focusing parameters comprises the following steps:
and selecting focusing parameters for focusing by utilizing head features, facial features or human eye features based on the detection result, and focusing the shooting equipment for collecting the current video frame according to the head features, the facial features or the human eye features.
5. The method of claim 4, wherein focusing a camera device that captures the current video frame according to the head feature, the facial feature, or the eye feature comprises:
when the face detection region does not contain the facial features of the preset target person, focusing the shooting equipment for collecting the current video frame according to the head features;
when the face detection region contains the facial features of the preset target person, detecting whether the facial features comprise human eye features; if not, focusing the shooting equipment for collecting the current video frame according to the facial features; and if so, focusing the shooting equipment for acquiring the current video frame according to the human eye features in the facial features.
6. The method according to any one of claims 1 to 5, further comprising:
and when the preset target person is not located at the preset position of the current video frame, adjusting the shooting equipment according to the preset target person so as to enable the preset target person to be located at the preset position of the current video frame.
7. The method according to any one of claims 1 to 5, wherein determining a face detection region in the current video frame according to the region position and the first size comprises:
when the tracking target is the head or the head-shoulder part of the preset target person, taking a head area or a head-shoulder area corresponding to the area position and the first size as a face detection area in the current video frame;
when the tracking target is a human body of the preset target person, selecting a target area according to the area position in the current video frame, and taking the target area as a human face detection area; wherein the first size is larger than a second size of the target area.
8. An auto-focusing apparatus of a photographing device, the apparatus comprising:
the target tracking module is used for carrying out target tracking on a preset target person to obtain the area position and the first size of a tracking target corresponding to the preset target person in the current video frame;
a face detection area determining module, configured to determine a face detection area in the current video frame according to the area position and the first size;
the face detection module is used for carrying out face detection on the face detection area to obtain a detection result;
the focusing module is used for selecting corresponding focusing parameters according to the detection result and focusing the shooting equipment for acquiring the current video frame based on the focusing parameters; and when the detection results are different, the selected focusing parameters are different.
9. An electronic device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor realizes the steps of the method of any of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
CN202211200070.2A 2022-09-29 2022-09-29 Automatic focusing method and device for shooting equipment, electronic equipment and storage medium Pending CN115550551A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211200070.2A CN115550551A (en) 2022-09-29 2022-09-29 Automatic focusing method and device for shooting equipment, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211200070.2A CN115550551A (en) 2022-09-29 2022-09-29 Automatic focusing method and device for shooting equipment, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115550551A true CN115550551A (en) 2022-12-30

Family

ID=84731662

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211200070.2A Pending CN115550551A (en) 2022-09-29 2022-09-29 Automatic focusing method and device for shooting equipment, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115550551A (en)

Similar Documents

Publication Publication Date Title
US9813607B2 (en) Method and apparatus for image capture targeting
WO2018201809A1 (en) Double cameras-based image processing device and method
CN108076278B (en) Automatic focusing method and device and electronic equipment
US10289923B2 (en) Image production from video
US20190089910A1 (en) Dynamic generation of image of a scene based on removal of undesired object present in the scene
CN103685940A (en) Method for recognizing shot photos by facial expressions
US9628700B2 (en) Imaging apparatus, imaging assist method, and non-transitory recoding medium storing an imaging assist program
JP2020053774A (en) Imaging apparatus and image recording method
CN108521862A (en) Method and apparatus for track up
CN115022549B (en) Shooting composition method, shooting composition device, computer equipment and storage medium
WO2019000715A1 (en) Method and system for processing image
CN115550551A (en) Automatic focusing method and device for shooting equipment, electronic equipment and storage medium
CN111726531B (en) Image shooting method, processing method, device, electronic equipment and storage medium
CN115514887A (en) Control method and device for video acquisition, computer equipment and storage medium
US11790483B2 (en) Method, apparatus, and device for identifying human body and computer readable storage medium
US10970901B2 (en) Single-photo generating device and method and non-volatile computer-readable media thereof
CN112995503A (en) Gesture control panoramic image acquisition method and device, electronic equipment and storage medium
US10013736B1 (en) Image perspective transformation system
CN116095462B (en) Visual field tracking point position determining method, device, equipment, medium and product
CN115665555A (en) Automatic exposure method and device for shooting equipment, electronic equipment and storage medium
CN114697545B (en) Mobile photographing system and photographing composition control method
CN116405656A (en) Camera judging method, device, computer equipment and storage medium
CN115334241B (en) Focusing control method, device, storage medium and image pickup apparatus
CN113873160B (en) Image processing method, device, electronic equipment and computer storage medium
CN116405768A (en) Video picture area determining method, apparatus, computer device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination