CN116405656A - Camera judging method, device, computer equipment and storage medium - Google Patents

Camera judging method, device, computer equipment and storage medium

Info

Publication number
CN116405656A
Authority
CN
China
Prior art keywords
preset
terminal
camera
picture
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310253149.XA
Other languages
Chinese (zh)
Inventor
张伟俊
刘靖康
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Insta360 Innovation Technology Co Ltd
Original Assignee
Insta360 Innovation Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Insta360 Innovation Technology Co Ltd filed Critical Insta360 Innovation Technology Co Ltd
Priority to CN202310253149.XA
Publication of CN116405656A

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 17/00: Diagnosis, testing or measuring for television systems or their details
    • H04N 17/002: Diagnosis, testing or measuring for television cameras
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60: Control of cameras or camera modules
    • H04N 23/61: Control of cameras or camera modules based on recognised objects
    • H04N 23/64: Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
    • H04N 23/695: Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
    • H04N 5/00: Details of television systems
    • H04N 5/76: Television signal recording

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Studio Devices (AREA)

Abstract

The application relates to a camera judging method and apparatus, computer equipment, a storage medium, and a computer program product. The method includes the following steps: while a gimbal drives a terminal to rotate in different preset directions in sequence, recording the video that the terminal is shooting with a camera, to obtain video pictures; determining picture motion information according to the feature change information of the video pictures; generating a picture parameter sequence for the terminal's rotation according to the picture motion information; and determining the orientation type of the camera relative to the terminal according to which preset motion trend the picture parameter sequence satisfies. With this scheme, the orientation type of the camera collecting the video pictures can be judged relative to the terminal even when the camera is occupied by another application program.

Description

Camera judging method, device, computer equipment and storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a method and apparatus for determining a camera, a computer device, a storage medium, and a computer program product.
Background
When a user shoots with a cell phone, tablet computer, or other smart device, a built-in video shooting application or another application that occupies the camera may be running.
If an application without a tracking function occupies the camera, target tracking requires collecting the video data shot through the camera that application calls and performing target recognition on the collected data. In this case, target tracking can be realized only if it is accurately determined whether the video data was acquired by a front camera, a rear camera, or a camera of some other orientation type.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a camera judging method, apparatus, computer device, computer-readable storage medium, and computer program product that can accurately determine whether the camera capturing video data is a front camera, a rear camera, or a camera of another orientation type.
In a first aspect, the present application provides a camera judging method. The method includes the following steps:
while a gimbal drives a terminal to rotate in different preset directions in sequence, recording the video that the terminal is shooting with a camera, to obtain video pictures;
determining picture motion information according to the feature change information of the video pictures;
generating a picture parameter sequence for the terminal's rotation according to the picture motion information;
and determining the orientation type of the camera relative to the terminal according to which preset motion trend the picture parameter sequence satisfies.
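The four claimed steps can be sketched as a small pipeline. The function names and stand-in implementations below are illustrative only, not taken from the patent:

```python
def judge_camera_orientation(record_video, compute_motion, build_sequence, match_trend):
    """Chain the four steps: record -> motion info -> parameter sequence -> trend match."""
    frames = record_video()              # step 1: record while the gimbal rotates
    motion = compute_motion(frames)      # step 2: picture motion from feature changes
    sequence = build_sequence(motion)    # step 3: parameter sequence for the rotation
    return match_trend(sequence)         # step 4: compare with the preset motion trends

# Minimal stand-ins that show the data flow end to end.
result = judge_camera_orientation(
    record_video=lambda: ["frame0", "frame1"],
    compute_motion=lambda frames: {"up": 1.0, "down": 1.0, "left": 1.2, "right": 1.2},
    build_sequence=lambda motion: sorted(motion.items()),
    match_trend=lambda seq: "front" if seq else "unknown",
)
```

Each stage is a placeholder; the later embodiments fill in concrete choices (screen recording, optical flow, duration sequences, trend matching).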
In one embodiment, the preset directions include a first preset direction, a second preset direction, a third preset direction and a fourth preset direction.
In one embodiment, recording the video that the terminal shoots with the camera while the gimbal drives the terminal to rotate in different preset directions in sequence includes:
driving the terminal through the gimbal, according to a preset rotation sequence of the gimbal, to rotate by preset angles towards the first preset direction, the second preset direction, the third preset direction and the fourth preset direction respectively;
and recording the video picture displayed on the terminal while the gimbal drives the terminal to rotate, the video picture being a picture of the video that the terminal is shooting with the camera.
In one embodiment, the picture motion information includes: the motion durations while the gimbal drives the terminal to rotate by the preset angles towards the first preset direction, the second preset direction, the third preset direction and the fourth preset direction respectively.
In one embodiment, the first preset direction and the second preset direction are opposite directions, and the third preset direction and the fourth preset direction are opposite directions.
In one embodiment, driving the terminal through the gimbal, according to a preset rotation sequence of the gimbal, to rotate by preset angles towards the first, second, third and fourth preset directions respectively includes:
driving the terminal through the gimbal to rotate towards the first preset direction by a first preset angle;
driving the terminal through the gimbal to rotate towards the second preset direction by the first preset angle;
driving the terminal through the gimbal to rotate towards the third preset direction by a second preset angle;
and driving the terminal through the gimbal to rotate towards the fourth preset direction by the second preset angle.
In one embodiment, the first preset angle is within the longitudinal rotation range of the gimbal, the second preset angle is within the transverse rotation range of the gimbal, and the first preset angle is smaller than or equal to the second preset angle; alternatively, the first preset angle is within the longitudinal rotation range of the terminal, the second preset angle is within the transverse rotation range of the terminal, and the first preset angle is smaller than or equal to the second preset angle.
In one embodiment, the first preset angle is greater than or equal to 20 degrees and less than or equal to 30 degrees; the second preset angle is greater than or equal to 30 degrees and less than or equal to 40 degrees.
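As a quick sanity check, the angle constraints of this embodiment can be expressed directly. The helper below is hypothetical; only the numeric ranges come from the text:

```python
def angles_valid(first_angle, second_angle):
    """Check the embodiment's ranges: first preset angle in [20, 30] degrees,
    second preset angle in [30, 40] degrees, and first <= second."""
    return (20 <= first_angle <= 30
            and 30 <= second_angle <= 40
            and first_angle <= second_angle)
```

Note that the overlap at 30 degrees means first and second angles may coincide, which still satisfies the "smaller than or equal" clause of the previous embodiment.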
In one embodiment, driving the terminal through the gimbal, according to a preset rotation sequence of the gimbal, to rotate by preset angles towards the first, second, third and fourth preset directions respectively includes:
driving the terminal through the gimbal to rotate towards the first preset direction by a third preset angle;
driving the terminal through the gimbal to rotate towards the third preset direction by the third preset angle;
driving the terminal through the gimbal to rotate towards the second preset direction by the third preset angle;
and driving the terminal through the gimbal to rotate towards the fourth preset direction by the third preset angle.
In one embodiment, determining picture motion information according to the feature change information of the video picture includes:
combining video pictures recorded at different timestamps to obtain video picture pairs;
and performing optical flow calculation on the pixel points in each video picture pair to obtain the picture motion information.
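The patent does not fix a particular optical flow algorithm. As a toy stand-in, the dominant shift between two small grayscale frames can be estimated by exhaustive block matching; the function name and parameters below are assumptions, not from the text:

```python
def estimate_shift(prev, curr, max_shift=2):
    """Estimate the dominant (dx, dy) motion between two grayscale frames
    (lists of rows of intensities) by minimising the mean absolute
    difference over candidate shifts -- a toy stand-in for optical flow."""
    h, w = len(prev), len(prev[0])
    best, best_err = (0, 0), float("inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            err, n = 0, 0
            for y in range(h):
                for x in range(w):
                    sy, sx = y + dy, x + dx
                    if 0 <= sy < h and 0 <= sx < w:
                        err += abs(prev[y][x] - curr[sy][sx])
                        n += 1
            if n and err / n < best_err:
                best, best_err = (dx, dy), err / n
    return best

# A single bright pixel moved one column to the right between frames.
prev = [[0] * 5 for _ in range(5)]
curr = [[0] * 5 for _ in range(5)]
prev[2][2] = 255
curr[2][3] = 255
```

A production implementation would use a dense optical flow method rather than this exhaustive search, but the output shape is the same: a per-pair motion direction.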
In one embodiment, the pixel points used for the optical flow calculation include: each pixel point in each video picture pair, or each pixel point in a downsampled image of each video picture pair.
In one embodiment, performing optical flow calculation on the pixel points in each video picture pair to obtain the picture motion information includes:
determining the pixel point velocities in each video picture pair respectively, where the pixel point velocities include the pixel motion direction of each video picture;
and generating the picture parameter sequence for the terminal's rotation according to the picture motion information includes:
determining the pixel motion duration in each preset direction based on the pixel motion direction and timestamp of each video picture;
and combining the pixel point motion durations in the preset directions according to the recording order of the video pictures to obtain the pixel point parameter sequence for the terminal's rotation.
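Per-frame motion directions plus timestamps can be folded into per-direction durations in recording order. The encoding below is a hypothetical reading of the claimed pixel point parameter sequence:

```python
def build_parameter_sequence(frame_motions):
    """frame_motions: list of (timestamp, direction) pairs, one per recorded
    frame, in recording order. Returns [(direction, duration), ...], merging
    consecutive frames that move the same way -- an assumed encoding of the
    claimed pixel point parameter sequence."""
    sequence = []
    for i, (ts, direction) in enumerate(frame_motions[:-1]):
        dt = frame_motions[i + 1][0] - ts  # time until the next frame
        if sequence and sequence[-1][0] == direction:
            sequence[-1] = (direction, sequence[-1][1] + dt)
        else:
            sequence.append((direction, dt))
    return sequence
```

The resulting sequence carries both what the claims need: the motion duration in each preset direction and the order in which those directions occurred.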
In one embodiment, the terminal includes a front camera and a rear camera, and determining the orientation type of the camera relative to the terminal according to the preset motion trend satisfied by the picture parameter sequence includes: determining whether the camera is the front camera or the rear camera of the terminal according to the preset motion trend that the picture parameter sequence satisfies.
In one embodiment, the picture parameter sequence includes a pixel parameter sequence; the pixel parameter sequence includes the pixel motion duration in each preset direction and the sequence of pixel motion directions aligned with those durations.
In one embodiment, determining whether the camera is the front camera or the rear camera of the terminal according to the preset motion trend satisfied by the picture parameter sequence includes:
determining the target pixel points whose motion durations in the preset directions meet the duration condition of each preset direction and whose direction sequence conforms to the preset direction sequence;
and determining whether the camera is the front camera or the rear camera of the terminal according to the target pixel points.
In one embodiment, the preset motion trend includes at least one of a preset front-camera motion trend and a preset rear-camera motion trend, and the front-camera motion trend and the rear-camera motion trend are in reverse order of each other.
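Under this claim, the two trends can be represented as reversed direction sequences. The concrete direction names below are illustrative assumptions, not from the text:

```python
# Hypothetical encoding of the two preset trends as reversed sequences.
REAR_TREND = ["up", "down", "left", "right"]
FRONT_TREND = list(reversed(REAR_TREND))

def matching_trend(observed):
    """Return which preset trend an observed direction sequence satisfies,
    or None when it matches neither."""
    if observed == FRONT_TREND:
        return "front"
    if observed == REAR_TREND:
        return "rear"
    return None
```

The point of the reversal is that the same physical gimbal motion produces opposite apparent picture motion for the two camera orientations, so one observation distinguishes them.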
In one embodiment, determining whether the camera is the front camera or the rear camera of the terminal according to the preset motion trend satisfied by the picture parameter sequence includes:
counting, among the pixel points of the video picture, the pixel points that conform to the front-camera motion trend and the pixel points that conform to the rear-camera motion trend, to obtain the number of front-camera pixel points and the number of rear-camera pixel points;
if the number of front-camera pixel points is larger than the number of rear-camera pixel points, determining that the camera is the front camera of the terminal;
and if the number of front-camera pixel points is smaller than the number of rear-camera pixel points, determining that the camera is the rear camera of the terminal.
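The counting step above is a straightforward majority vote over per-pixel labels; a minimal sketch, with the tie case left unspecified by the text:

```python
def classify_by_pixel_vote(pixel_trends):
    """pixel_trends: iterable of 'front' / 'rear' labels, one per pixel point
    whose motion matched a preset trend. The majority decides, per the claim;
    the equal-count case is not covered by the embodiment."""
    front = sum(1 for t in pixel_trends if t == "front")
    rear = sum(1 for t in pixel_trends if t == "rear")
    if front > rear:
        return "front camera"
    if front < rear:
        return "rear camera"
    return "undetermined"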
In one embodiment, recording the video that the terminal shoots with the camera to obtain a video picture includes:
in the process of driving the terminal through the gimbal to rotate by preset angles towards the different preset directions, acquiring each frame of the terminal's screen picture during the motion;
detecting pixel difference information of the screen picture of each frame;
determining a variation value of the pixel difference information;
determining picture recording areas in the screen pictures of each frame according to the change values;
and recording the picture recording area to obtain a video picture.
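The change-value step can be sketched as a bounding-box search over per-pixel change values; the threshold, the 2D-list representation, and the rectangular shape of the area are assumptions for illustration:

```python
def find_recording_area(change_values, threshold):
    """change_values: 2D list of per-pixel change values accumulated across
    the recorded screen frames. Returns the bounding box
    (top, left, bottom, right) of the pixels whose change value exceeds
    `threshold`, or None when nothing changed -- a hypothetical way to pick
    the picture recording area out of the full screen capture."""
    rows = [y for y, row in enumerate(change_values)
            if any(v > threshold for v in row)]
    cols = [x for x in range(len(change_values[0]))
            if any(row[x] > threshold for row in change_values)]
    if not rows or not cols:
        return None
    return (min(rows), min(cols), max(rows), max(cols))
```

Only the pixels inside the returned box need to be recorded and fed to the optical flow step, which is the data reduction the embodiment describes.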
In one embodiment, before the gimbal drives the terminal to rotate in the different preset directions, the method includes:
if the video picture is in a standby mode for target object recording, displaying prompt information that the video picture is in the standby mode, until, in response to a camera judging instruction, the recording mode of the video picture is determined to be the target object recording mode.
In one embodiment, after determining whether the camera is the front camera or the rear camera of the terminal, the method further includes:
determining the position of a target object in the video picture;
determining picture acquisition parameters for the camera according to the position of the target object;
driving the terminal through the gimbal to rotate according to the picture acquisition parameters;
and recording the video picture that the terminal shoots with the camera while rotating according to the picture acquisition parameters, the target object being at a preset picture position in the video picture recorded according to the picture acquisition parameters.
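One plausible reading of "picture acquisition parameters" is the pan/tilt offset that moves the target toward the preset picture position (taken here as the frame centre by default); the units and sign convention below are assumptions:

```python
def picture_acquisition_parameters(target_xy, frame_size, preset_xy=None):
    """Offset from the target's current position to the preset picture
    position, in pixels -- a hypothetical sketch of the claimed parameters."""
    w, h = frame_size
    px, py = preset_xy if preset_xy is not None else (w / 2, h / 2)
    tx, ty = target_xy
    return {"pan": px - tx, "tilt": py - ty}
```

The gimbal would convert these pixel offsets to rotation commands via the camera's field of view, which the claims leave to the implementation.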
In a second aspect, the application further provides a camera judging apparatus. The apparatus includes:
a terminal motion module, configured to record, while the gimbal drives the terminal to rotate in different preset directions in sequence, the video that the terminal shoots with the camera, to obtain video pictures;
an information acquisition module, configured to determine picture motion information according to the feature change information of the video pictures;
a parameter generation module, configured to generate a picture parameter sequence for the terminal's rotation according to the picture motion information;
and a camera determining module, configured to determine the orientation type of the camera relative to the terminal according to the preset motion trend satisfied by the picture parameter sequence.
In a third aspect, the present application further provides a handheld gimbal including a processor, where the processor is configured to implement the camera judging steps of any of the foregoing embodiments when executing a computer program.
In a fourth aspect, the present application also provides a computer device. The computer device comprises a memory storing a computer program and a processor implementing the steps of determining the camera in any of the embodiments described above when the processor executes the computer program.
In a fifth aspect, the present application also provides a computer-readable storage medium. The computer readable storage medium has stored thereon a computer program which, when executed by a processor, performs the steps of determining a camera in any of the embodiments described above.
In a sixth aspect, the present application also provides a computer program product. The computer program product comprises a computer program which, when executed by a processor, implements the steps of determining a camera in any of the embodiments described above.
With the camera judging method and apparatus, computer equipment, storage medium and computer program product described above, the gimbal drives the terminal to rotate in different preset directions in sequence, so that the video being shot through the camera for the other application program changes. Because the other application program occupies the camera, the video that the terminal shoots with the camera is recorded instead, yielding video pictures of the terminal rotating in the different preset directions. Picture motion information is determined according to the feature change information of the video pictures; the picture motion information is combined in order into a picture parameter sequence for the terminal's rotation; and the orientation type of the camera relative to the terminal is accurately determined according to which preset motion trend the picture parameter sequence satisfies. The camera orientation type includes, but is not limited to, the front camera or the rear camera of the terminal.
Drawings
FIG. 1 is an application environment diagram of a method for determining a camera in an embodiment;
FIG. 2 is a flowchart of a method for determining a camera in an embodiment;
FIG. 3 is an interface diagram of a video frame in one embodiment;
FIG. 4 is an interface diagram of a video frame in one embodiment;
FIG. 5 is a flowchart of a camera determination method in an application scenario in one embodiment;
FIG. 6 is a block diagram of a camera judgment device in one embodiment;
fig. 7 is an internal structural diagram of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
The camera judging method provided by the embodiments of the application can be applied in the application environment shown in fig. 1. The terminal 102 may be, but is not limited to, any of various cameras, video cameras, panoramic cameras, action cameras, personal computers, notebook computers, smartphones, tablet computers and portable wearable devices; the wearable devices may be smart watches, smart bracelets, headsets, and the like. The terminal 102 may be fixed to the gimbal body by welding or the like, or may be detachably or rotatably connected to the gimbal body.
In one embodiment, as shown in fig. 2, a method for determining a camera is provided, and the method is applied to the terminal 102 in fig. 1 for illustration, and includes the following steps:
step 202, in the process of driving the terminal to rotate in different preset directions sequentially through the cradle head, recording videos shot by the terminal through the camera to obtain video pictures.
A preset direction is one of the rotation directions of the gimbal's turntable, and each preset direction is associated with the preset motion trend. Whether the video picture conforms to the preset motion trend is judged from the sequence obtained by combining the motion duration of the video picture in each preset direction with the order of those preset directions. The turntable direction is the rotation direction of the mounting platform in the gimbal on which the terminal sits; the gimbal controls the motion of the terminal through the rotations or deformations of its parts, so that the video picture produces feature change information.
The video picture is obtained by recording the video that an application program of the terminal is shooting while that application occupies the terminal's camera. Because the application program occupies the camera, the content shot by the camera cannot be obtained directly; instead, a screen picture is captured by screen recording or the like, the picture recording area within the screen picture is determined, and the video picture is then obtained from that picture recording area. The video picture may be at least part of the screen picture, and may also be directly transmitted, stored, or displayed on other electronic devices.
In one embodiment, recording the video that the terminal shoots with the camera while the gimbal drives the terminal to rotate in different preset directions in sequence, to obtain video pictures, includes: in response to a camera judging instruction sent by the gimbal, the terminal controls the gimbal to drive the terminal to rotate in the different preset directions in sequence, and records, through a screen recording thread of the terminal, the video that the terminal is shooting with the camera. The camera judging instruction may be an object tracking instruction or a video picture judging instruction, and the camera judging instruction, the object tracking instruction and the video picture judging instruction may be the same data or different data.
In one embodiment, the relationship between the screen picture and the video picture is as shown in fig. 3 (a) or fig. 3 (b), where fig. 3 (a) shows a small-window video picture shot by the application program through the front camera, and fig. 3 (b) shows a small-window video picture shot by the application program through the rear camera. Recording the video that the terminal shoots with the camera to obtain the video picture includes the following steps: in the process of driving the terminal through the gimbal to rotate by preset angles in different preset directions, acquiring each frame of the terminal's screen picture during the motion; detecting pixel difference information of each frame of screen picture; determining the change value of the pixel difference information; determining the picture recording area in each frame of screen picture according to the change values; and recording the picture recording area to obtain the video picture.
The screen frame includes a frame displayed on a screen of the terminal in the rotation process, the screen frame includes at least part of a video frame, and the screen frame can also include a frame of a system desktop frame or other application programs of the terminal. In case that the camera of the terminal is occupied by other applications, it is necessary to determine a relationship between the video picture and the screen picture, which can be determined by pixel difference information of each frame of the screen picture.
Pixel difference information of each frame of screen picture is used for judging whether pixels in each frame of screen picture have differences or not; the pixel difference information includes, but is not limited to, pixel color channel differences, pixel brightness differences, etc. at corresponding positions of each frame of screen.
The change value of the pixel difference information represents the degree to which a pixel point in the screen picture changes during rotation. If the change value of a pixel point is larger, its change information fluctuates more over the rotation period, so it is more likely to be a pixel point inside the picture recording area; if the change value is smaller, its change information fluctuates less, and the pixel point lies in another type of picture area in the screen picture. The other types of picture areas include the desktop picture area and the picture areas of other application programs.
The picture recording area is the area of the screen picture in which the video shot by the camera is displayed while the camera shoots; the picture recording area may be a region of interest delimited on the screen by a square, circle, ellipse, irregular polygon, or other shape. Recording the picture recording area yields the video picture.
The picture recording area can be used, in the tracking initialization flow, to determine the area in which target detection of the target object is performed; once target tracking is under way, it serves as the area in which target tracking runs and in which the gimbal control signals are calculated. For example, a typical control strategy keeps the target centred in the camera's lens area: the picture recording area is taken as the known lens area, and the current offset of the target from its centre is calculated to determine the gimbal strategy for the target object tracking process.
Illustratively, in a scene in which a user plays a game and shoots video simultaneously on the terminal, the screen picture includes both a game picture and a video picture. Even though the pixel difference information of the game picture and the video picture may be similar, the game picture does not produce a larger change value as the gimbal rotates, while the video picture changes with the different objects shot by the camera. The picture recording area in the screen picture can therefore be determined from the change values of the pixel difference information; that area is where the video picture sits within the screen picture.
Each frame of the terminal's screen picture during the motion is acquired by recording the full screen through screen recording; pixel difference information of each frame of screen picture is detected; the change value of the pixel difference information is determined; whether each pixel changes is determined from the pixel difference information, so that the pixel points with changing difference information are picked out of the full-screen picture and the number of pixels to process is reduced; and the picture recording area is then determined in each frame of screen picture according to the change values, so that the area is located within a screen picture of smaller data volume and recorded to obtain the video picture.
In one embodiment, the preset directions include a first preset direction, a second preset direction, a third preset direction, and a fourth preset direction.
The first, second, third and fourth preset directions are four different motion directions used to keep the terminal's rotation relatively stable about a given position; a motion direction may refer to the gimbal, or to the motion direction of the mobile phone or camera mounted on the gimbal. Illustratively, the first preset direction and the second preset direction are respectively a pitch direction (Pitch) and a roll direction (Roll), so that if the mobile phone moves the lens by 25 degrees in the pitch direction, the lens can be kept stable by moving simultaneously in the pitch and roll directions of the gimbal. In the process of driving the terminal through the gimbal to rotate in the four preset directions in sequence, the degree of change of the video picture is relatively moderate, which strengthens the feature change information of the video picture and helps determine the picture motion information efficiently. The video picture displayed by the terminal is recorded so as to feed back the changing state of the video picture in real time during rotation.
In one embodiment, recording the video that the terminal shoots with the camera while the gimbal drives the terminal to rotate in different preset directions in sequence includes the following steps: driving the terminal through the gimbal, according to a preset rotation sequence of the gimbal, to rotate by preset angles towards the first, second, third and fourth preset directions respectively; and recording the video picture displayed on the terminal while the gimbal drives the terminal to rotate, the video picture being a picture of the video that the terminal is shooting with the camera.
The preset rotation sequence and rotation angles for the first, second, third and fourth preset directions can follow any of several preset path motions, or a combination of them; the preset path motion may be, for example, a first preset path motion or a second preset path motion. In the process of driving the terminal through the gimbal to rotate by the preset angles towards the four preset directions in sequence, the degree of change of the video picture is moderate, which further strengthens the feature change information of the video picture and helps determine the picture motion information more efficiently.
The picture motion information here includes: the motion durations while the gimbal drives the terminal to rotate by the preset angles towards the first, second, third and fourth preset directions respectively. Thus, according to the feature change information of the video picture, the motion duration of each of the four rotations is determined, and the picture parameter sequence is formed by combining the four motion durations in order, so as to accurately judge whether the camera the terminal is shooting with is a front camera or a rear camera.
In one embodiment, the first preset direction and the second preset direction are opposite, and the third preset direction and the fourth preset direction are opposite. The first preset direction is the direction in which the gimbal moves the terminal's camera upwards, and the second preset direction is the direction in which the gimbal moves the camera downwards; the third preset direction is the direction in which the gimbal rotates the camera leftwards, and the fourth preset direction is the direction in which the gimbal rotates the camera rightwards. By rotating in opposite directions in sequence through the gimbal, the feature change information can be accurately extracted from the video picture shot during rotation, and from that information it can be accurately judged whether the camera is the front camera or the rear camera of the terminal.
In one embodiment, the process of the first preset path movement is described. In the process of adopting the first preset path movement, the video pictures are sequentially shown in fig. 4 (a) to 4 (e); correspondingly, according to a preset rotation sequence of the cradle head, the cradle head drives the terminal to rotate by preset angles towards a first preset direction, a second preset direction, a third preset direction and a fourth preset direction respectively, and the method comprises the following steps: the terminal is driven by the cradle head to rotate towards a first preset direction according to a first preset angle; the terminal is driven by the cradle head to rotate towards a second preset direction according to a first preset angle; the terminal is driven by the cradle head to rotate towards a third preset direction according to a second preset angle; the terminal is driven by the cradle head to rotate towards a fourth preset direction according to a second preset angle.
Optionally, before the terminal is driven by the cradle head to rotate towards the first preset direction by the first preset angle, the method includes: if the terminal is not at a preset position, controlling the cradle head to move the terminal to the preset position, so that the rotation range of the cradle head is not exceeded while the cradle head drives the terminal to rotate; the preset position may be the center position of the rotation range.
In one possible implementation manner, the terminal is driven by the cradle head to rotate towards a first preset direction according to a first preset angle, including: responding to a camera judging instruction, and driving a terminal at a preset position to rotate a first preset angle towards a first preset direction through a cradle head; the camera judging instruction may be an object tracking instruction, and may also be a video frame judging instruction.
In one possible implementation manner, driving the terminal through the cradle head to rotate towards the second preset direction by the first preset angle includes: after the terminal is driven by the cradle head to rotate towards the first preset direction by the first preset angle, the terminal is driven by the cradle head to rotate towards the second preset direction by the first preset angle, so that the cradle head resets the terminal to the preset position through the first preset angle. During the rotation in the second preset direction, the reset may be performed directly through a turntable reset device, or may be achieved by rotating again.
In one possible implementation manner, the terminal is driven by the cradle head to rotate towards a third preset direction according to a second preset angle, including: after the terminal is driven to rotate by the first preset angle towards the second preset direction through the cradle head, the terminal is driven to rotate by the second preset angle towards the third preset direction through the cradle head.
In one possible implementation manner, the terminal is driven by the cradle head to rotate towards a fourth preset direction according to a second preset angle, including: after the terminal is driven to rotate by the second preset angle towards the third preset direction by the cradle head, the terminal is driven to rotate by the second preset angle towards the fourth preset direction by the cradle head, so that the cradle head drives the terminal to reset to the preset position through the second preset angle.
Judging whether the camera is the front camera or the rear camera of the terminal through the characteristic change information in this way also reduces the rotation time of the cradle head movement.
The process of the first preset path movement involves a first preset angle and a second preset angle; the first preset angle is greater than or equal to 20 degrees and less than or equal to 30 degrees; the second preset angle is greater than or equal to 30 degrees and less than or equal to 40 degrees.
When the first preset angle is 20 degrees, the movement duration of the cradle head in the first preset direction and the second preset direction is short, and the camera can be judged once the data amount of the characteristic change information of the video picture has been calculated; when the first preset angle is 30 degrees, after the data amount of the characteristic change information of the video picture is calculated, whether the camera is the front camera or the rear camera can be judged more accurately.
When the second preset angle is 30 degrees, the movement duration of the cradle head in the third preset direction and the fourth preset direction is short, and the camera can be judged once the data amount of the characteristic change information of the video picture has been calculated; when the second preset angle is 40 degrees, after the data amount of the characteristic change information of the video picture is calculated, whether the camera is the front camera or the rear camera can be judged more accurately.
In certain exemplary embodiments of the first preset path movement, the first preset angle is 25 degrees and the second preset angle is 35 degrees; correspondingly, the process of adopting the first preset path movement includes: the cradle head moves the lens upwards by 25 degrees, moves the lens downwards to return to the preset position, moves the lens leftwards by 35 degrees, and moves the lens rightwards to return to the preset position. In this exemplary embodiment, one lens movement takes about 700 ms, about 2.8 s in total, so the lens movement time is short on the premise of accurately judging the camera.
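The first preset path above can be sketched as a simple motion script. The angles and per-move timing come from this exemplary embodiment; the data layout and helper name are illustrative assumptions, not an actual cradle head API.

```python
# First preset path of this embodiment: (direction, degrees) per lens move.
# Moves 2 and 4 return the terminal to the preset (center) position.
FIRST_PRESET_PATH = [
    ("up", 25),     # first preset direction, first preset angle
    ("down", 25),   # back to the preset position
    ("left", 35),   # third preset direction, second preset angle
    ("right", 35),  # back to the preset position
]

def total_move_time_ms(path, ms_per_move=700):
    """Roughly 700 ms per lens move in this embodiment, 2.8 s in total."""
    return len(path) * ms_per_move
```

With four lens moves at about 700 ms each, the sketch reproduces the 2.8 s total quoted above.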
Optionally, the first preset angle is within the longitudinal rotation range of the cradle head, the second preset angle is within the transverse rotation range of the cradle head, and the first preset angle is less than or equal to the second preset angle. The longitudinal rotation range and the transverse rotation range of the cradle head are limited separately; for the smaller rotation range a smaller angle is set, which prevents the cradle head from reaching its rotation limit, and setting the first preset angle to be less than or equal to the second preset angle keeps the time consumption as small as possible.
The longitudinal rotation range and the transverse rotation range of the cradle head are the rotation ranges of the cradle head in the mutually perpendicular directions. The rotation range of the mounting platform of the cradle head is determined through the longitudinal rotation range and the transverse rotation range of the cradle head, and the mounting platform of the cradle head is used for mounting a terminal.
Optionally, the first preset angle is within the longitudinal rotation range of the terminal, the second preset angle is within the transverse rotation range of the terminal, and the first preset angle is less than or equal to the second preset angle. The longitudinal rotation range and the transverse rotation range of the terminal are limited separately; for the smaller rotation range a smaller angle is set, which prevents the cradle head from reaching its rotation limit, and setting the first preset angle to be less than or equal to the second preset angle keeps the time consumption as small as possible.
Taking the common yaw direction (Yaw), roll direction (Roll) and pitch direction (Pitch) as examples, the longitudinal rotation range of the cradle head is the pitch direction range, and the transverse rotation range of the cradle head is the roll direction range or the yaw direction range; correspondingly, the longitudinal rotation range of the terminal is determined by the longitudinal rotation range of the cradle head and is the movement range of the terminal in the pitch direction, and the transverse rotation range of the terminal is determined by the transverse rotation range of the cradle head and is the movement range of the terminal in the roll direction or the yaw direction. The longitudinal rotation range of the cradle head is not limited to the pitch direction range, and the transverse rotation range of the cradle head is not limited to the roll direction range or the yaw direction range; they may also be rotation ranges in other postures.
In one embodiment, the process of the second preset path movement is described. According to the preset rotation sequence of the cradle head, the cradle head drives the terminal to rotate by preset angles towards a first preset direction, a second preset direction, a third preset direction and a fourth preset direction respectively, and the method comprises the following steps: the terminal is driven by the cradle head to rotate towards a first preset direction according to a third preset angle; the terminal is driven by the cradle head to rotate towards a third preset direction according to a third preset angle; the terminal is driven by the cradle head to rotate towards a second preset direction according to a third preset angle; and controlling the cradle head to drive the terminal to rotate towards a fourth preset direction according to a third preset angle.
The first preset angle, the second preset angle and the third preset angle are different angles of the cradle head turntable, and the third preset angle is between the first preset angle and the second preset angle.
In certain exemplary embodiments of the second preset path movement, the third preset angle is 30 degrees, and the process of adopting the second preset path movement includes: the cradle head moves the lens upwards, leftwards, downwards and rightwards by 30 degrees in sequence, and when the lens is moved rightwards by 30 degrees it happens to return to the preset position. In this exemplary embodiment, one lens movement takes about 1000 ms, about 4 s in total.
The cradle head rotates by the third preset angle in the first preset direction, the third preset direction, the second preset direction and the fourth preset direction in turn; the characteristic change information of the video picture can be accurately extracted from the video picture shot during the rotation, and whether the camera is the front camera or the rear camera of the terminal is judged through the characteristic change information.
Step 204, determining the picture motion information according to the characteristic change information of the video picture.
The characteristic change information is change information generated by video pictures at different moments; alternatively, the feature change information may be obtained by sequentially analyzing pixel difference values of pixel positions based on each continuous video frame, or may be obtained by sequentially analyzing pixel difference values of pixel positions for video frames selected at intervals of time stamps. For example, the feature change information may be obtained by using a feature point matching method, may be obtained by using a feature block matching method, and may be obtained by using an optical flow method.
The picture motion information is the motion information during the rotation of the cradle head turntable, and includes a picture motion direction and a picture motion value at each moment. The picture motion values are combined according to the picture motion direction at each moment to form a picture trend parameter sequence for each picture motion direction.
Alternatively, the picture motion information is defined as at least one type of picture optical flow vector according to a difference in picture motion value; when the picture motion value includes a distance of the picture over a certain period of time, the picture optical flow vector includes a picture displacement; when the picture motion value includes a speed of the picture over a certain period of time, the picture optical flow vector includes the picture speed. The picture displacement is calculated by pixel point displacement generated by at least part of pixels in the picture at different moments, and the picture speed is calculated based on pixel point speeds of at least part of pixels in the picture at different moments.
In one possible embodiment, determining the picture motion information based on the characteristic change information of the video picture includes: in the video picture, calculating the motion information of each pixel point according to the characteristic change information of each pixel point in different video picture frames; counting the motion information of each pixel point to obtain a counting result, and determining the picture motion information according to the counting result. Illustratively, the number of pixel points at each speed may be obtained by counting the pixel points at each speed; the maximum among the numbers of pixel points at each speed is determined; and the speed at which the maximum number of pixel points is located is determined as the picture motion speed.
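The counting step described above amounts to taking the mode over per-pixel speeds. A minimal sketch (quantizing the speeds into discrete bins beforehand is an assumption, not stated in the text):

```python
from collections import Counter

def picture_motion_speed(pixel_speeds):
    """Count the pixel points at each (quantized) speed and return the speed
    held by the largest number of pixels as the picture motion speed."""
    counts = Counter(pixel_speeds)
    speed, _count = counts.most_common(1)[0]
    return speed
```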
In one embodiment, determining picture motion information based on feature change information of a video picture includes: combining video pictures recorded by different time stamps to obtain each video picture pair; and in each video picture pair, carrying out optical flow calculation on the pixel points to obtain picture motion information.
The pair of video pictures consists of two video pictures with different time stamps, which are spaced apart. Alternatively, the video picture pairs are combined at intervals of a preset time stamp, and each video picture and its forward video picture or its backward video picture constitute a video picture pair. The video picture pair comprises each pixel point for carrying out optical flow calculation according to the corresponding relation of the positions.
In a possible implementation manner, combining video pictures recorded by different time stamps to obtain each video picture pair includes: sequentially determining video pictures recorded by time stamps of different pictures as a first video picture and a second video picture corresponding to the first video picture according to intervals of preset time stamps; the first video picture and the corresponding second video picture form a video picture pair; the second video picture and the first video picture have a preset time stamp interval, and the second video picture can be a forward video picture of the first video picture or a backward video picture of the first video picture. For example, for video picture a, video picture B, and video picture C, which are sequentially arranged, the time stamp interval of the three frames of video pictures is 3 frames at a certain frame rate, that is, 3 frames at the frame rate are used as intervals between video picture a and video picture B, and 3 frames at the frame rate are used as intervals between video picture B and video picture C; at this time, the video picture B may belong to a different video picture pair, which is the second video picture of the video picture a in one video picture pair, and the video picture B may be the first video picture of the video picture C in the other video picture pair.
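The pairing rule above can be sketched as follows, with an interval of 3 frames as in the example; a middle picture then serves as the second video picture of one pair and the first video picture of another. The function name is illustrative.

```python
def make_picture_pairs(frames, interval=3):
    """Pair each video picture with the picture `interval` frames before it,
    so pairs overlap and most pictures belong to two different pairs."""
    return [(frames[i - interval], frames[i])
            for i in range(interval, len(frames))]
```

For seven pictures A..G, picture D appears both as the second member of the pair (A, D) and as the first member of the pair (D, G), matching the example with video picture B.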
The optical flow calculation extracts, based on the pixel point positions in the video pictures, how the motion information of the pixel points changes in each frame of video picture. The data amount and the data processing steps involved in optical flow calculation are relatively small; in addition, feature extraction can be performed not only for regions where structural information exists, but also for flat regions, texture-rich regions, or other image regions.
The pixels used for optical flow calculation include pixels in each video picture pair, or pixels in each video picture pair downsampled graph.
The video picture pair downsampled picture is an image obtained by downsampling each video picture in the video picture pair. Optionally, downsampling each video picture pair according to a preset sampling rate to obtain downsampled pictures of each video picture pair; for example, if the video picture pair has a specification of 1920×1080, each pixel in the video picture pair is each pixel under the specification of 1920×1080; after 4 times downsampling is performed on a video picture pair with a specification of 1920 x 1080, each video picture pair downsampling diagram with a specification of 480 x 270 is obtained, and each pixel point in each video picture pair downsampling diagram is each pixel point under the specification of 480 x 270.
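A naive nearest-neighbour version of this downsampling illustrates the dimension arithmetic (a real pipeline would typically low-pass filter before decimating, e.g. with a resize function; this sketch simply keeps every fourth pixel):

```python
def downsample(frame, factor=4):
    """Keep every `factor`-th pixel in both axes: a 1080-row by 1920-column
    frame becomes 270 x 480 after 4x downsampling."""
    return [row[::factor] for row in frame[::factor]]
```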
When the pixel points in each video picture pair are all the pixel points in the video picture pair, the picture motion information is calculated in a dense optical flow manner; the picture motion information during the rotation is then smoother, which facilitates accurately judging whether the picture motion information conforms to the preset motion trend, and since the execution of the camera judging method occupies only a small part of the whole video shooting process, the data amount processed by the terminal remains relatively moderate. When the pixel points in each video picture pair are the pixel points in the downsampled picture of each video picture pair, at least one of the calculated data amount and the calculation time is reduced.
In an alternative embodiment, performing optical flow calculation on the pixel points in each video picture pair to obtain the picture motion information includes: performing optical flow calculation on the pixel points in each video picture pair to obtain the optical flow characteristics of each pixel point; and generating the picture motion information according to the optical flow characteristics of each pixel point. Alternatively, the optical flow characteristics may be calculated by the traditional Lucas-Kanade optical flow method or by various neural network models, and the optical flow characteristics of each pixel point may be sequentially combined, according to the direction, into a pixel point parameter sequence or another picture parameter sequence for the rotation of the terminal.
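The embodiments rely on Lucas-Kanade or neural-network optical flow; as a self-contained stand-in, the sketch below estimates a single global translation between two tiny grayscale frames by brute-force matching, which captures the same brightness-constancy idea without any imaging library. It is not the patent's method, only an illustration of flow estimation.

```python
def translation_flow(prev, curr, max_shift=3):
    """Estimate a single global (dx, dy) translation between two grayscale
    frames (lists of rows) by brute-force search over integer shifts,
    minimizing the sum of absolute differences on the overlapping region."""
    h, w = len(prev), len(prev[0])
    best, best_err = (0, 0), float("inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            err = 0
            for y in range(max(0, -dy), min(h, h - dy)):
                for x in range(max(0, -dx), min(w, w - dx)):
                    # Brightness constancy: curr[y+dy][x+dx] matches prev[y][x]
                    err += abs(prev[y][x] - curr[y + dy][x + dx])
            if err < best_err:
                best_err, best = err, (dx, dy)
    return best
```

On a textured ramp shifted one pixel to the right, the estimate recovers the flow vector (1, 0).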
Step 206, generating the picture parameter sequence of the terminal when rotating according to the picture motion information.
The picture parameter sequence is an ordered parameter set obtained by sequentially combining picture motion information of each video picture, and the picture trend parameter sequence can more objectively reflect the motion rule of the video picture on the basis of picture motion information in a single preset direction, so as to judge whether the picture accords with the preset motion trend. The picture parameter sequence is obtained by orderly combining motion information of each video picture shot by the terminal in a rotation process in each preset direction.
In one embodiment, in each video picture pair, performing optical flow calculation on pixel points to obtain picture motion information, including: respectively determining the pixel point speeds in each video picture pair; each pixel speed comprises the pixel movement direction of each video picture;
correspondingly, the picture parameter sequence of the terminal when rotating is generated according to the picture motion information, which comprises the following steps: determining the pixel point movement time length of each preset direction based on the pixel point movement direction and the time stamp of each video picture; and combining the pixel point motion time lengths in all preset directions according to the recording sequence of the video pictures to obtain a pixel point parameter sequence when the terminal rotates.
The pixel velocity is an instantaneous velocity calculated by a conventional optical flow method or a neural network, and can be determined based on time variation of image gray scale. The pixel speed comprises a pixel speed direction, and the pixel speed direction is the pixel movement direction of the pixel in the video picture surface.
The motion direction of the pixels of the video picture is the instantaneous direction of the pixels at the time stamp of the video picture, and the scene motion direction in the video picture is analyzed according to the instantaneous direction of each pixel. Optionally, taking the pixel point with the difference between the motion direction of the pixel point and the reference direction in a preset range as the pixel point of the reference direction, and determining the motion duration of the pixel point of the reference direction through the interval of the pixel point of the reference direction in the video picture pair time stamp; the reference direction is any one of preset directions, including but not limited to a first preset direction, a second preset direction, a third preset direction and a fourth preset direction.
Illustratively, the first video picture and the second video picture are arranged in recording order, and the time stamps of the first video picture and the second video picture are 1/30 second apart; in the video picture pair formed by the first video picture and the second video picture, the pixel point speed includes the instantaneous speed, at the time stamp of the first video picture, of the pixel points with the same coordinates, and the direction of the instantaneous speed is the pixel point movement direction at the time stamp of the first video picture; the duration of the preset direction to which the pixel movement direction at the time stamp of the first video picture belongs is increased by the time stamp interval between the first video picture and the second video picture, namely by 1/30 second, and so on.
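Accumulating a pixel's duration per direction, as in the 1/30-second example above, can be sketched as follows. The per-frame direction labels are assumed to come from a prior classification step; `None` marks frames whose flow matched no preset direction.

```python
def accumulate_durations(per_frame_directions, fps=30):
    """Each video picture whose pixel movement direction falls in a preset
    direction adds one frame interval (1/fps second) to that direction."""
    durations = {}
    for direction in per_frame_directions:
        if direction is not None:
            durations[direction] = durations.get(direction, 0.0) + 1.0 / fps
    return durations
```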
The recording sequence of the video pictures is the picture frame sequence of the video pictures, and is used for sequentially combining the pixel point motion time lengths of the pixel points in each preset direction to form a pixel point parameter sequence in each preset direction. The pixel point parameter sequence is the pixel point motion time length of at least part of the pixel points of the video picture in each preset direction in the video picture arranged according to the time stamp or other marks.
In one possible implementation, optical flow calculation is performed on pixels in each video frame pair to obtain a pixel speed, including: respectively determining each pixel point pair corresponding to the position in each video picture pair; and carrying out optical flow calculation according to each pixel point pair to obtain the respective pixel point speed of each pixel point. The pixel speed is an optical flow vector of the pixel, the optical flow vector can form an optical flow diagram of a video picture pair according to the pixel position, each point coordinate in the optical flow diagram corresponds to one video picture in the video picture pair one by one through the pixel position, and the optical flow vector of each pixel is determined through the pixel point pair corresponding to the position.
The pixel point pairs corresponding in position comprise: two pixels at associated positions in different video pictures of a video picture pair, which may be the pixels with the same pixel coordinates in the different video pictures of the video picture pair. Optionally, the video picture pair includes a first video picture and a second video picture, each of the first video picture and the second video picture being a matrix of K x L pixels; a first pixel point in the first video picture is located at (k1, l1), a second pixel point in the second video picture is located at (k1, l1), and the first pixel point and the second pixel point form a pixel point pair, where k1 is a positive integer in [1, K] and l1 is a positive integer in [1, L].
In one possible implementation, determining the pixel point movement duration of each preset direction based on each pixel point speed includes: sequentially calculating the included angle between each pixel point speed and the reference direction, the reference direction being any one of the preset directions; when the included angle is within the preset included angle range of the reference direction, determining the pixel point speed as a pixel point speed of the reference direction; and determining the pixel point movement duration of each preset direction according to the time stamps of the pixel point speeds of the reference direction. Illustratively, in an image coordinate system with the positive x-axis pointing right, the positive y-axis pointing down and the origin at the upper left corner, the standard up/down/left/right optical flow vectors are (0, -1), (0, 1), (-1, 0) and (1, 0), respectively. When the upward direction is the reference direction, an included angle of 0 between a certain pixel point speed (fx, fy) and the standard upward optical flow vector (0, -1) indicates a perfect match. To tolerate a small calculation error, the pixel point speed may be regarded as upward as long as the included angle falls within the (-10 degree, 10 degree) interval. The other directions are treated analogously until the pixel point movement direction of each preset direction is obtained. The pixel point movement direction is the movement direction in each video picture and is associated with the time stamp of the video picture, so the pixel point movement duration is determined step by step.
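The included-angle test above can be sketched directly, using the same image coordinate system and a 10-degree tolerance (the function name is illustrative):

```python
import math

# Image coordinate system from the text: x right, y down, origin top-left.
DIRECTIONS = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}

def classify_flow(fx, fy, tolerance_deg=10.0):
    """Return the preset direction whose standard optical flow vector makes
    an included angle of less than `tolerance_deg` with (fx, fy), or None."""
    norm = math.hypot(fx, fy)
    if norm == 0.0:
        return None  # no motion, no direction
    for name, (dx, dy) in DIRECTIONS.items():
        cos_angle = (fx * dx + fy * dy) / norm  # standard vectors are unit length
        angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))
        if angle < tolerance_deg:
            return name
    return None
```

A flow vector at 45 degrees between two preset directions matches neither and is discarded, which mirrors the tolerance interval in the text.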
In an exemplary embodiment, the video frames obtained by the system screen recording are analyzed as video pictures in their recording order; the frames may be delivered at a fixed frame rate, such as 30 fps, so a system screen recording frame is received every 1/30 second, recorded from front to back as {I_0, I_1, I_2, ..., I_N}. Optical flow calculation is performed at the time stamp intervals to obtain the optical flow vectors of all pixel points; the pixel optical flow vectors form an optical flow graph. For an optical flow graph F_N consisting of a matrix of K x L pixels, a point location on the graph is (k, l), with k in the interval [1, K] and l in the interval [1, L]. The optical flow vector at (k, l) is denoted F_N(k,l); this two-dimensional optical flow vector is the pixel speed (v_x, v_y), where v_x represents the movement speed in the x direction and v_y represents the movement speed in the y direction. The interval of the time stamps is embodied by the interval of frames. For example, with an interval of 3 frames, at frame 3 the optical flow between I_3 and I_0 is calculated and denoted F_3; at frame 4 the optical flow between I_4 and I_1 is calculated and denoted F_4; and so on, yielding the optical flow graph sequence {F_3, F_4, F_5, ..., F_N}.
For a pixel (k, l), the recording order of the video pictures forms a time sequence with adjacent time stamps 1/30 second apart, or another type of pixel point parameter sequence, namely {F_3(k,l), F_4(k,l), F_5(k,l), ..., F_N(k,l)}.
And step 208, determining the orientation type of the camera relative to the terminal according to the preset motion trend which is met by the picture parameter sequence.
The preset motion trend is the motion state obtained in advance for a camera of a given orientation type relative to the terminal, rotated according to the preset rotation sequence matched with that camera. Optionally, the preset motion trend includes at least one of a front-shooting motion trend and a rear-shooting motion trend; if the picture parameter sequence conforms to the front-shooting motion trend, the camera used by the terminal for shooting the video is the front camera; if the picture parameter sequence conforms to the rear-shooting motion trend, the camera used by the terminal for shooting the video is the rear camera; if the picture parameter sequence conforms to neither the front-shooting motion trend nor the rear-shooting motion trend, it is determined that the camera is abnormal and an error is reported.
Optionally, the camera orientation type may be an orientation type for an entity terminal, or an orientation type for a virtual terminal. For example, a virtual terminal in a virtual reality environment has a plurality of camera orientation types, and each camera orientation type has a respective preset movement trend.
In an alternative embodiment, the terminal includes a front camera and a rear camera; determining, according to the preset motion trend which the picture parameter sequence conforms to, the orientation type of the camera relative to the terminal includes: determining that the camera is the front camera or the rear camera of the terminal according to the preset motion trend which the picture parameter sequence conforms to. The judgment can thus be made for a mobile phone or another terminal provided with both a front camera and a rear camera.
In an alternative embodiment, the preset motion trend includes a pre-shooting time length condition of the front camera in each preset direction and a post-shooting time length condition of the rear camera in each preset direction; the picture motion information comprises motion information of each pixel point; the preset direction sequence comprises a preset direction sequence of the front camera and a preset direction sequence of the rear camera.
Correspondingly, determining, according to the preset motion trend which the picture parameter sequence conforms to, that the camera is the front camera or the rear camera of the terminal includes: in the video picture, determining the number of front-shooting pixel points conforming to the front camera shooting duration condition and the number of rear-shooting pixel points conforming to the rear camera shooting duration condition; if the number of front-shooting pixel points is larger than the number of rear-shooting pixel points, determining that the camera is the front camera of the terminal; if the number of front-shooting pixel points is smaller than the number of rear-shooting pixel points, determining that the camera is the rear camera of the terminal.
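The pixel-count comparison above is a majority vote over pixels. A minimal sketch (the names are assumed, and treating a tie as abnormal is an assumption, since the text only specifies the greater/smaller cases):

```python
def judge_camera(num_front_pixels, num_rear_pixels):
    """More pixels matching the front-camera trend -> front camera;
    more matching the rear-camera trend -> rear camera; a tie is treated
    here as abnormal (an assumption)."""
    if num_front_pixels > num_rear_pixels:
        return "front"
    if num_rear_pixels > num_front_pixels:
        return "rear"
    return "error"
```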
Optionally, the picture parameter sequence includes a pixel point parameter sequence, and the pixel point parameter sequence includes the movement duration of the pixel points in each preset direction together with the pixel direction order in which those durations are arranged. The movement duration of the pixel points in each preset direction refers to how long the pixel points move in each preset direction across the video pictures; the pixel direction order may follow the time stamp order of all or part of the video pictures, which may be the recording order. The parameter sequence of each pixel point in the picture is judged from two dimensions: one dimension is the pixel point movement duration, that is, the duration for which the pixel point moves in a preset direction; the other dimension is the pixel direction order, that is, the order in which the pixel point moves in the different preset directions. Through the combined judgment of the duration condition and the pixel direction order, whether the camera is the front camera or the rear camera is judged more accurately even when the video picture belongs to different scenes.
In one embodiment, determining that the camera is a front camera or a rear camera of the terminal according to a preset motion trend according to the picture parameter sequence includes: determining that the motion time length of the pixel points in the preset direction meets the time length condition of each preset direction, and enabling the sequence of the pixel directions to accord with the target pixel points in the sequence of the preset directions; and determining that the camera is a front camera or a rear camera of the terminal according to the target pixel point.
The target pixels are the pixels in the video picture used to judge whether the camera is the front camera or the rear camera. A target pixel is qualified along at least two dimensions: one is the motion duration of the pixel, i.e. how long the pixel moves in a preset direction; the other is the pixel direction order, i.e. the order in which the pixel moves through the different preset directions. Optionally, the target pixels may be the intersection of the pixels satisfying the duration condition and the pixels conforming to the preset direction order, or may be pixels obtained by further data cleaning of that intersection.
In one possible implementation, determining the target pixels whose motion duration in each preset direction meets the duration condition of that direction and whose pixel direction order conforms to the preset direction order includes: judging whether the motion duration of each pixel in the preset directions meets the duration conditions of the preset directions, to obtain the pixels meeting the duration condition; judging whether the pixel direction order conforms to the preset direction order, to obtain the pixels conforming to the preset direction order; and determining the target pixels from the pixels meeting the duration condition and the pixels conforming to the preset direction order.
In one possible implementation, judging whether the motion duration of a pixel in the preset directions meets the duration condition of each preset direction includes: sequentially determining, for the same pixel, a first-direction motion duration in the first preset direction, a second-direction motion duration in the second preset direction, a third-direction motion duration in the third preset direction and a fourth-direction motion duration in the fourth preset direction; and determining the pixel as meeting the duration condition when the first-direction motion duration exceeds a first-direction duration threshold, the second-direction motion duration exceeds a second-direction duration threshold, the third-direction motion duration exceeds a third-direction duration threshold and the fourth-direction motion duration exceeds a fourth-direction duration threshold.
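The per-pixel duration check described above can be sketched as follows. This is a minimal illustration, assuming the motion durations have already been accumulated per pixel as a mapping from direction name to milliseconds; the direction names, the 400 ms thresholds and the data layout are assumptions for illustration, not the patent's implementation.

```python
# Hypothetical sketch: per-direction duration thresholds (ms).
THRESHOLDS_MS = {"up": 400, "down": 400, "left": 400, "right": 400}

def meets_duration_condition(durations_ms, thresholds=THRESHOLDS_MS):
    """True when the pixel moved longer than the threshold in every
    preset direction (first through fourth)."""
    return all(durations_ms.get(d, 0) > t for d, t in thresholds.items())

def filter_pixels(pixel_durations, thresholds=THRESHOLDS_MS):
    """Keep only the pixels whose durations satisfy all four thresholds."""
    return [p for p, dur in pixel_durations.items()
            if meets_duration_condition(dur, thresholds)]
```

For example, a pixel with durations `{"up": 700, "down": 650, "left": 500, "right": 450}` passes, while one whose "up" run lasted only 300 ms is filtered out.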
In one possible implementation, judging whether the pixel direction order conforms to the preset direction order to obtain the pixels conforming to the preset direction order includes: judging whether the order of the first-direction, second-direction, third-direction and fourth-direction motion durations of a pixel matches the preset direction order in which the cradle head rotates; if so, the pixel conforms to the preset direction order.
In one possible implementation, the target pixels are classified into front-capture pixels and rear-capture pixels. A front-capture pixel is a pixel whose motion duration in the preset directions meets the front-capture duration condition of each preset direction and whose pixel direction order conforms to the front-capture preset direction order; a rear-capture pixel is a pixel whose motion duration in the preset directions meets the rear-capture duration condition of each preset direction and whose pixel direction order conforms to the rear-capture preset direction order.
Correspondingly, determining that the camera is the front camera or the rear camera of the terminal according to the target pixels includes: determining that the camera is the front camera or the rear camera according to the counted numbers of front-capture pixels and rear-capture pixels.
Since the target pixels are obtained by jointly judging the duration conditions and the pixel direction order, the camera can be judged to be the front camera or the rear camera more accurately even when the video picture belongs to different scenes.
Optionally, the preset motion trend includes at least one of a preset front-capture motion trend and a preset rear-capture motion trend, where the front-capture preset direction order and the rear-capture preset direction order are the reverse of each other.
The front-capture motion trend is the preset front-capture picture parameter sequence produced when video pictures are pre-acquired through the front camera in sequence according to the front-capture preset direction order. Correspondingly, the rear-capture motion trend is the preset rear-capture picture parameter sequence produced when video pictures are pre-acquired through the rear camera in sequence according to the rear-capture preset direction order.
Because the front-capture preset direction order and the rear-capture preset direction order are opposite, the judging method can determine that the camera is the front camera or the rear camera of the terminal according to which trend the picture parameter sequence conforms to; the picture parameter sequences conforming to the front-capture and rear-capture motion trends include the pixel parameter sequences conforming to the front-capture motion trend and to the rear-capture motion trend, respectively.
In one embodiment, determining that the camera is the front camera or the rear camera of the terminal according to the preset motion trend met by the picture parameter sequence includes: counting, among the pixels of the video picture, the front-capture pixels conforming to the front-capture motion trend and the rear-capture pixels conforming to the rear-capture motion trend, to obtain the number of front-capture pixels and the number of rear-capture pixels; if the number of front-capture pixels is larger than the number of rear-capture pixels, determining that the camera is the front camera of the terminal; and if the number of front-capture pixels is smaller than the number of rear-capture pixels, determining that the camera is the rear camera of the terminal.
Optionally, the front-capture pixels and rear-capture pixels may be obtained by classifying the target pixels, or by classifying other pixels used to judge the preset motion trend. For example, the motion directions of the pixels of each video picture may be counted to obtain the number of pixels whose motion direction conforms to a preset direction; the front-capture pixels and rear-capture pixels are then determined by jointly judging that pixel count and the pixel direction order.
In this way, the front-capture pixels and the rear-capture pixels are determined according to the different conditions met by the preset motion trends, so the camera can be accurately judged to be the front camera or the rear camera even when the video picture belongs to different scenes.
For example, if the first preset direction is up, the second preset direction is down, the third preset direction is left and the fourth preset direction is right, then under the first preset rotation order the following two cases are pixel parameter sequences conforming to the preset motion trends:
700 ms up + 700 ms down + 700 ms left + 700 ms right = conforms to the rear-capture motion trend;
700 ms down + 700 ms up + 700 ms right + 700 ms left = conforms to the front-capture motion trend.
Here 700 ms is the time set for the mobile phone/lens to move continuously in one direction. In practice, however, the motion has start and stop phases, so the duration actually detected by the optical flow method may be less than 700 ms. A smaller value can therefore be used when judging whether a pixel conforms to the preset motion trend, to better tolerate control and calculation errors; for example, the following criterion can be used to judge the pixel parameter sequences conforming to the preset motion trends:
400 ms up + 400 ms down + 400 ms left + 400 ms right = conforms to the rear-capture motion trend;
400 ms down + 400 ms up + 400 ms right + 400 ms left = conforms to the front-capture motion trend.
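The tolerance criterion above can be sketched as follows: a pixel's observed runs of (direction, duration) match a trend template when the directions appear in the template's order and each run lasts at least the relaxed minimum. The templates mirror the example sequences; the function name and data layout are assumptions.

```python
# Templates taken from the worked example above (up/down/left/right).
REAR_TEMPLATE = ["up", "down", "left", "right"]
FRONT_TEMPLATE = ["down", "up", "right", "left"]

def matches_trend(runs, template, min_ms=400):
    """runs: list of (direction, duration_ms) in recording order.
    The 400 ms minimum tolerates start/stop losses against the
    commanded 700 ms per direction."""
    if len(runs) != len(template):
        return False
    return all(d == t and ms >= min_ms
               for (d, ms), t in zip(runs, template))
```

A run of 520 ms detected against a commanded 700 ms still matches, while a 350 ms run falls below the relaxed threshold and does not.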
In the above judging method, the cradle head drives the terminal to rotate in the different preset directions in sequence, so that the video shot by the camera for the other application program changes; because the other application program occupies the camera, the video pictures during the rotation are obtained by screen recording. The picture motion information is then determined from the feature change information of the video pictures, so that the camera can be accurately determined to be the front camera or the rear camera of the terminal according to the preset motion trend met by the picture motion information.
In one embodiment, before the terminal is driven by the cradle head to rotate in the different preset directions, the method includes: if the video picture is in a standby mode of target object recording, displaying prompt information indicating that the video picture is in the standby mode, until the recording mode of the video picture is determined to be the target object recording mode in response to a judging instruction of the camera.
When the video picture is in the standby mode of target object recording, the prompt information is displayed through human-computer interaction means such as the Dynamic Island, a floating window or a push notification, so as to remind the user and wait for an object tracking instruction, where the object tracking instruction may be the judging instruction of the camera or an instruction that triggers the judging instruction of the camera.
When the recording mode of the video picture is determined to be the target object recording mode in response to the judging instruction of the camera, steps 202-208 are executed to judge whether the camera is the front camera or the rear camera of the terminal, completing the judging process of the camera in the target object recording mode; the video picture in the target object recording mode is determined according to the picture acquisition parameters of the camera.
In an optional implementation, displaying the prompt information until the recording mode is determined includes: if no instruction triggered by the cradle head trigger is detected, displaying prompt information that the terminal is in the tracking standby mode of the target object, until a picture recording instruction or another instruction representing the judging of the camera is responded to, and determining the recording mode of the video picture to be the target object recording mode; the initialization process of the target object recording mode includes steps 202-208.
In another optional implementation, displaying the prompt information until the recording mode is determined includes: if no object tracking instruction triggered by the cradle head is detected, prompting that the terminal is not in the target object tracking mode; if an object tracking instruction triggered by the cradle head is detected, initializing the linkage between the terminal and the cradle head through the application program, and determining the video picture and whether the camera shooting the video is the front camera or the rear camera.
In this way, the recording mode is displayed through the interface of the terminal, so that the user can intuitively determine the shooting state of the video picture and judge whether to perform the video picture recording initialization process in that recording mode.
In one embodiment, after determining that the camera is the front camera or the rear camera of the terminal, the method further includes: determining the position of a target object in the video picture; determining picture acquisition parameters of the camera according to the position of the target object; driving, through the cradle head, the terminal to rotate according to the picture acquisition parameters; and recording the video picture shot by the camera of the terminal while the terminal rotates according to the picture acquisition parameters, where the target object is at a preset picture position in the video picture recorded according to the picture acquisition parameters.
The video picture is located in a picture recording area within the screen picture according to the picture acquisition parameters, and the position of the target object is determined based on that picture recording area; during target object tracking, this area is the region used for tracking and determines how the tracking process and the cradle head control signal are calculated.
The picture acquisition parameters represent the distribution information of the target object in the video picture; different distribution information can be obtained by calculating the area, coordinates or other data of the salient region and the associated region of the target object, so as to adjust the rotation parameters of the cradle head during shooting. The salient region of the target object is the region identified by target detection, and the associated region is associated with the salient region according to the object structure used in target detection; illustratively, the salient region of the target object is a human head region and the associated region is a human body region.
Optionally, if the rotation parameters of the cradle head during shooting are adjusted, the picture acquisition parameters are a picture acquisition offset, which at least includes an adjustment of the acquisition direction and may also include an adjustment of the focal length. The picture acquisition offset is determined according to whether the salient region of the video picture is located at the preset picture position. When the position difference between the salient region and the target position is smaller than a position threshold, the picture acquisition offset is considered small, and the cradle head does not need to adjust the position of the terminal, or only makes a small adjustment; when the position difference is larger than the position threshold, the picture acquisition offset is large, and the target object of the video picture is adjusted. Optionally, when the target object of the video picture is adjusted, the recognizable object type of the target object may be changed, or the target object itself may be changed. The target object is the tracking target, which may be a recognizable object type preset by the system, including but not limited to a human head, a human face, a cat, a dog or a vehicle.
The picture acquisition parameters are determined from the position of the target object in at least one frame of the video picture, and the cradle head is controlled through the picture acquisition parameters to drive the terminal to rotate according to that position, so that the preset picture position in the recorded video picture matches the position of the target object and the target object is kept in shot. Optionally, the preset picture position is the center of the video picture, or a position selected by the user.
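A minimal proportional-control sketch of the picture acquisition offset described above, under assumed pixel coordinates: the control signal is zero while the target stays within a position threshold of the preset picture position, and proportional to the offset otherwise. The gain, threshold and signal format are assumptions, not the patent's control law.

```python
def capture_offset(target_center, preset_pos, threshold_px=20, gain=0.1):
    """Return a (pan, tilt) adjustment for the cradle head;
    (0.0, 0.0) when the target is already close enough to the
    preset picture position."""
    dx = target_center[0] - preset_pos[0]
    dy = target_center[1] - preset_pos[1]
    if abs(dx) <= threshold_px and abs(dy) <= threshold_px:
        return (0.0, 0.0)          # small offset: no adjustment needed
    return (gain * dx, gain * dy)  # larger offset: proportional correction
```

For a 1920x1080 picture centered at (960, 540), a target at (1160, 540) yields a pan correction while a target a few pixels off yields none.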
In one embodiment, after determining that the camera is the front camera or the rear camera of the terminal, the method further includes: when the cradle head carries a mobile phone, camera or other shooting terminal, if the camera of the terminal is occupied by video shooting software, the video picture shot by the camera is acquired through the background of the terminal; it is then judged whether a target object exists in the acquired video picture, and if so, the picture acquisition parameters are determined according to the position of the target object so as to track the target object. This realizes the interaction between front-end shooting and back-end tracking and improves the shooting experience of the user.
In an exemplary embodiment, an instruction sent by pressing a button on the handheld cradle head is detected, and the cradle head notifies the application program in the terminal through Bluetooth or another communication connection; tracking initialization is then entered. During initialization, the area where the video picture is located is first determined through the camera judging process and the video picture judging process, and the camera is judged to be the front camera or the rear camera. Target detection is then performed on the video picture to detect objects of preset types; the distance between the center point of each detected rectangular frame and the center of the preview area is calculated one by one, and the candidate object with the smallest distance is selected for tracking. Common detection methods can be used, whether based on handcrafted features (such as template matching, key point matching or key feature methods) or on convolutional neural networks (such as YOLO, SSD, R-CNN or Mask R-CNN).
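The candidate-selection rule just described (smallest distance between a detection's rectangular-frame center and the preview-area center) might be sketched as follows; the (x, y, w, h) box format is an assumption, and any of the detection methods named above could supply the boxes.

```python
import math

def box_center(box):
    """Center of an (x, y, w, h) rectangular frame."""
    x, y, w, h = box
    return (x + w / 2.0, y + h / 2.0)

def pick_candidate(boxes, preview_center):
    """Return the detected box whose center is nearest the preview-area
    center, or None when nothing was detected."""
    if not boxes:
        return None
    return min(boxes, key=lambda b: math.dist(box_center(b), preview_center))
```

With a preview center of (100, 100), a box centered at (100, 100) wins over one centered at (5, 5).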
In an exemplary embodiment, the application is applied to a specific application scenario, as shown in fig. 5, which includes: detecting an application program starting instruction of the terminal; the application program connects the terminal and the cradle head; the shooting page of the terminal is switched to a screen recording tracking mode; a standby state of screen recording tracking is entered, and a prompt window pops up to switch the application program to the background; an object tracking instruction is waited for through the Dynamic Island, a floating window, a push notification or another human-computer interaction means; if no instruction triggered by the cradle head trigger is detected, the terminal is prompted that it is not in the tracking mode of the target object; if an instruction triggered by clicking the cradle head trigger for the first time is detected, the initialization flow is initiated; initializing the linkage means determining the video picture and determining whether the camera shooting the video is the front camera or the rear camera; after initialization, the target object is determined and tracked; the target object is displayed in the video picture; the mode in which the terminal is tracking and recovering the target object is displayed through a human-computer interaction interface such as a floating frame; if tracking of the target object has been kept for 1 hour, the step of "if no instruction triggered by the cradle head trigger is detected" is re-executed; and if the event of "detecting the first click of the cradle head trigger and initiating the initialization flow" occurred before tracking was kept, tracking is cancelled when the cradle head trigger is clicked again, and the step of "if no instruction triggered by the cradle head trigger is detected, prompting that the terminal is not in the tracking mode of the target object" is re-executed.
In another exemplary embodiment, the application is applied to a specific application scenario, which includes: detecting an application program starting instruction of the terminal, the application program connecting the terminal and the cradle head; switching the shooting page of the terminal to a screen recording tracking mode, then entering a standby state of screen recording tracking and popping up a prompt window to switch the application program to the background; waiting for an object tracking instruction through real-time message activities (Live Activities) such as the Dynamic Island, a floating window, a push notification or another human-computer interaction means; if no object tracking instruction triggered by the cradle head is detected, prompting that the terminal is not in the target object tracking mode; if an object tracking instruction triggered by the cradle head is detected, initializing the linkage between the terminal and the cradle head through the application program, and determining the video picture and whether the camera shooting the video is the front camera or the rear camera; after the video picture and the camera are judged, the initialization is completed and target object tracking starts. During tracking, the target object is displayed in the video picture, the mode in which the terminal is tracking and recovering the target object is displayed through a human-computer interaction interface such as a floating frame, and tracking of the target object is kept; while tracking is kept, if an object tracking termination instruction triggered by the cradle head is detected or the tracking process has lasted 1 hour, the object tracking instruction is waited for again.
The object tracking instruction is triggered by the user clicking a tracking start button on the cradle head, and the object tracking termination instruction may likewise be triggered by the user clicking a tracking termination button on the cradle head; the tracking start button and the tracking termination button may be the same button, which functions as the tracking start button while waiting for the object tracking instruction and as the tracking termination button while tracking of the target object is kept.
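The single-button behaviour just described can be modelled as a two-state toggle; this tiny sketch is illustrative only, and the class and method names are assumptions.

```python
class TrackingButton:
    """One physical cradle head button that starts tracking when idle
    and terminates tracking when tracking is in progress."""

    def __init__(self):
        self.tracking = False

    def press(self):
        """Toggle the state and report which instruction was issued."""
        self.tracking = not self.tracking
        return "start" if self.tracking else "stop"
```

The first press issues the object tracking instruction, the next press the termination instruction, and so on.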
It should be understood that, although the steps in the flowcharts of the above embodiments are shown sequentially as indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the order of execution is not strictly limited, and the steps may be performed in other orders. Moreover, at least some of the steps in those flowcharts may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and whose order of execution is not necessarily sequential but may be in turn or alternate with at least part of the other steps or of the sub-steps or stages of other steps.
Based on the same inventive concept, an embodiment of the application further provides a camera judging device for implementing the above camera judging method. The implementation of the solution provided by the device is similar to that described for the method above, so for the specific limitations in the embodiments of the one or more camera judging devices provided below, reference may be made to the limitations of the camera judging method above, which are not repeated here.
In one embodiment, as shown in fig. 6, there is provided a device for judging a camera, including:
the terminal motion module 602 is configured to record a video shot by the terminal using a camera in a process of driving the terminal to rotate in different preset directions sequentially through the pan-tilt, so as to obtain a video picture;
an information acquisition module 604, configured to determine picture motion information according to the feature change information of the video picture;
a parameter generating module 606, configured to generate a picture parameter sequence of the terminal when rotating according to the picture motion information;
and the camera determining module 608 is configured to determine a camera orientation type of the camera relative to the terminal according to a preset motion trend that is met by the picture parameter sequence.
In one embodiment, the preset directions include a first preset direction, a second preset direction, a third preset direction and a fourth preset direction.
In one embodiment, the terminal motion module 602 is configured to:
according to a preset rotation sequence of the cradle head, the cradle head drives the terminal to rotate by preset angles towards the first preset direction, the second preset direction, the third preset direction and the fourth preset direction respectively;
recording a video picture displayed on the terminal in the process that the cloud deck drives the terminal to rotate; the video picture is a picture of the video which is shot by the terminal by using a camera.
In one embodiment, the picture motion information includes: the motion duration during which the cradle head drives the terminal to rotate by the preset angle towards each of the first preset direction, the second preset direction, the third preset direction and the fourth preset direction.
In one embodiment, the first preset direction and the second preset direction are opposite directions, and the third preset direction and the fourth preset direction are opposite directions.
In one embodiment, the terminal motion module 602 is configured to:
the terminal is driven by the cradle head to rotate towards a first preset direction according to a first preset angle;
the terminal is driven by the cradle head to rotate towards the second preset direction according to the first preset angle;
the terminal is driven by the cradle head to rotate towards the third preset direction according to a second preset angle;
and driving the terminal to rotate towards the fourth preset direction according to the second preset angle through the cradle head.
In one embodiment, the first preset angle is within a longitudinal rotation range of the pan-tilt, the second preset angle is within a transverse rotation range of the pan-tilt, and the first preset angle is smaller than or equal to the second preset angle; or the first preset angle is in the longitudinal rotation range of the terminal, the second preset angle is in the transverse rotation range of the terminal, and the first preset angle is smaller than or equal to the second preset angle.
In one embodiment, the first preset angle is greater than or equal to 20 degrees and less than or equal to 30 degrees; the second preset angle is greater than or equal to 30 degrees and less than or equal to 40 degrees.
In one embodiment, the terminal motion module 602 is configured to:
the terminal is driven by the cradle head to rotate towards a first preset direction according to a third preset angle;
the terminal is driven by the cradle head to rotate towards the third preset direction according to the third preset angle;
the terminal is driven by the cradle head to rotate towards the second preset direction according to the third preset angle;
and controlling the cradle head to drive the terminal to rotate towards the fourth preset direction according to the third preset angle.
In one embodiment, the information collecting module 604 is configured to:
combining video pictures recorded by different time stamps to obtain each video picture pair;
and in each video picture pair, carrying out optical flow calculation on the pixel points to obtain picture motion information.
In one embodiment, the pixels used for optical flow calculation include: each pixel in each video picture pair, or each pixel in a downsampled image of each video picture pair.
In one embodiment, the information collecting module 604 is configured to:
respectively determining the pixel point speeds in each video picture pair; the pixel speed comprises the pixel motion direction of each video picture;
Correspondingly, the parameter generating module 606 is configured to determine a pixel movement duration in each preset direction based on the pixel movement direction of each video frame and the timestamp;
and combining the pixel point movement time lengths in the preset directions according to the recording time sequence of the video picture to obtain a pixel point parameter sequence when the terminal rotates.
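Converting the per-frame-pair optical-flow vectors into the pixel parameter sequence (per-direction motion durations plus the pixel direction order) might look like the following sketch. It assumes the flow vectors for one pixel have already been computed by an optical flow routine, uses image coordinates (y grows downward, so positive vy means "down"), and approximates each pair's duration with a fixed frame interval; all of these are assumptions for illustration.

```python
def quantize(vx, vy):
    """Map a flow vector to the dominant preset direction
    (image coordinates: +x right, +y down)."""
    if abs(vx) >= abs(vy):
        return "right" if vx > 0 else "left"
    return "down" if vy > 0 else "up"

def pixel_parameter_sequence(flows, frame_ms):
    """flows: list of (vx, vy) per consecutive frame pair for one pixel.
    Returns (per-direction durations in ms, pixel direction order)."""
    durations, order = {}, []
    for vx, vy in flows:
        d = quantize(vx, vy)
        durations[d] = durations.get(d, 0) + frame_ms
        if not order or order[-1] != d:  # record each new run once
            order.append(d)
    return durations, order
```

Three upward-flow pairs followed by three downward-flow pairs at 33 ms per pair yield 99 ms of "up", 99 ms of "down", and the direction order ["up", "down"].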
In one embodiment, the terminal comprises a front camera and a rear camera; the camera determining module 608 is configured to: and determining the camera orientation type of the camera relative to the terminal according to the preset motion trend which is met by the picture parameter sequence.
In one embodiment, the picture parameter sequence includes a pixel parameter sequence, where the pixel parameter sequence includes the motion duration of the pixels in each preset direction and the pixel direction order.
In one embodiment, the camera determination module 608 is configured to:
determining target pixels whose motion duration in each preset direction meets the duration condition of that direction and whose pixel direction order conforms to the preset direction order;
and determining that the camera is the front camera or the rear camera of the terminal according to the target pixels.
In one embodiment, the preset motion trend includes: at least one of a preset front-capture motion trend and a preset rear-capture motion trend, where the front-capture preset direction order and the rear-capture preset direction order are the reverse of each other.
In one embodiment, the camera determination module 608 is configured to:
counting, among the pixels of the video picture, the front-capture pixels conforming to the front-capture motion trend and the rear-capture pixels conforming to the rear-capture motion trend, to obtain the number of front-capture pixels and the number of rear-capture pixels;
if the number of front-capture pixels is larger than the number of rear-capture pixels, determining that the camera is the front camera of the terminal;
and if the number of front-capture pixels is smaller than the number of rear-capture pixels, determining that the camera is the rear camera of the terminal.
In one embodiment, the terminal motion module 602 is configured to:
in the process of driving the terminal through the cradle head to rotate by a preset angle toward different preset directions, capturing each frame of the terminal's screen picture during the motion;
detecting pixel difference information of each frame of the screen picture;
determining a change value of the pixel difference information;
determining a picture recording area in each frame of the screen picture according to the change value;
and recording the picture recording area to obtain the video picture.
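A minimal sketch of locating the picture recording area from per-pixel change values follows; the threshold and function name are assumptions, not part of the disclosure.

```python
import numpy as np

def find_recording_area(frames, change_threshold=10.0):
    """Locate the screen region whose pixels change across frames; the
    changing region is taken as the picture recording area.

    frames: list of (H, W) grayscale screen captures.
    Returns a (top, left, bottom, right) bounding box, or None if no
    pixel changes beyond the threshold.
    """
    stack = np.stack([f.astype(np.float32) for f in frames])
    change = np.abs(np.diff(stack, axis=0)).max(axis=0)   # per-pixel change value
    ys, xs = np.nonzero(change > change_threshold)
    if ys.size == 0:
        return None
    return int(ys.min()), int(xs.min()), int(ys.max()) + 1, int(xs.max()) + 1
```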
In one embodiment, the terminal motion module 602 is configured to:
and if, when responding to a camera determination instruction, the video picture is in a standby mode for target-object recording, displaying prompt information indicating that the video picture is in the standby mode, until the recording mode of the video picture is determined to be the target-object recording mode.
In one embodiment, after the determining that the camera is a front camera or a rear camera of the terminal, the information collecting module 604 is configured to:
determining the position of a target object in the video picture;
determining picture acquisition parameters of the camera according to the position of the target object;
driving the terminal through the cradle head to rotate according to the picture acquisition parameters;
and recording the video picture shot by the terminal with the camera during rotation according to the picture acquisition parameters, where the target object is at a preset picture position in the video picture recorded according to the picture acquisition parameters.
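The mapping from the target's position to picture acquisition parameters could look like the following sketch; the proportional gain and the (pan, tilt) output convention are assumptions, not part of the disclosure.

```python
def compute_acquisition_params(target_pos, frame_size,
                               preset_pos=(0.5, 0.5), gain_deg=30.0):
    """Map the target's offset from the preset picture position to
    pan/tilt rotation commands (in degrees) for the cradle head.

    target_pos: (x, y) pixel position of the target in the video picture.
    frame_size: (width, height) of the video picture.
    preset_pos: desired normalized picture position of the target.
    """
    w, h = frame_size
    dx = target_pos[0] / w - preset_pos[0]   # normalized horizontal offset
    dy = target_pos[1] / h - preset_pos[1]   # normalized vertical offset
    return gain_deg * dx, gain_deg * dy      # (pan_degrees, tilt_degrees)
```

Driving the cradle head with these commands moves the target toward the preset picture position, at which point the offsets, and hence the commands, go to zero.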
All or part of the modules in the above camera determination device may be implemented in software, hardware, or a combination thereof. The above modules may be embedded in, or independent of, a processor in the computer device in hardware form, or may be stored in a memory of the computer device in software form, so that the processor can call and execute the operations corresponding to the above modules.
It will be appreciated by those skilled in the art that the structure shown in fig. 7 is merely a block diagram of part of the structure related to the present application and does not limit the computer device to which the present application is applied; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, a handheld cradle head is also provided, including a processor that implements the steps of the method embodiments described above when executing a computer program.
In an embodiment, there is also provided a computer device comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps of the method embodiments described above when the computer program is executed.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, carries out the steps of the method embodiments described above.
In an embodiment, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the steps of the method embodiments described above.
It should be noted that, the user information (including, but not limited to, user equipment information, user personal information, etc.) and the data (including, but not limited to, data for analysis, stored data, presented data, etc.) referred to in the present application are information and data authorized by the user or sufficiently authorized by each party, and the collection, use and processing of the related data are required to comply with the related laws and regulations and standards of the related countries and regions.
Those skilled in the art will appreciate that all or part of the flows in the methods described above may be implemented by a computer program instructing relevant hardware; the computer program may be stored on a non-transitory computer-readable storage medium and, when executed, may include the flows of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, Resistive Random Access Memory (ReRAM), Magnetoresistive Random Access Memory (MRAM), Ferroelectric Random Access Memory (FRAM), Phase Change Memory (PCM), graphene memory, and the like. The volatile memory may include Random Access Memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM is available in many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM). The databases referred to in the embodiments provided herein may include at least one of relational and non-relational databases. Non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided herein may be, but are not limited to, general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic units, quantum-computing-based data processing logic units, and the like.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of technical features contains no contradiction, it should be considered within the scope of this specification.
The above embodiments express only a few implementations of the present application; their description is relatively specific and detailed but should not be construed as limiting the scope of the patent. It should be noted that those of ordinary skill in the art may make various modifications and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application. Accordingly, the protection scope of this patent shall be subject to the appended claims.

Claims (24)

1. A camera determination method, characterized by comprising the following steps:
in the process of driving a terminal to rotate in different preset directions sequentially through a cradle head, recording videos shot by the terminal by using a camera to obtain video pictures;
determining picture motion information according to the characteristic change information of the video picture;
generating a picture parameter sequence of the terminal when the terminal rotates according to the picture motion information;
and determining a camera orientation type of the camera relative to the terminal according to a preset motion trend to which the picture parameter sequence conforms.
2. The method of claim 1, wherein the predetermined direction comprises a first predetermined direction, a second predetermined direction, a third predetermined direction, and a fourth predetermined direction.
3. The method of claim 2, wherein recording the video shot by the terminal using the camera in the process of driving the terminal to rotate in different preset directions sequentially by the cradle head, comprises:
according to a preset rotation sequence of the cradle head, the cradle head drives the terminal to rotate by preset angles towards the first preset direction, the second preset direction, the third preset direction and the fourth preset direction respectively;
recording a video picture displayed on the terminal in the process that the cloud deck drives the terminal to rotate; the video picture is a picture of the video which is shot by the terminal by using a camera.
4. The method according to claim 3, wherein the picture motion information comprises: a motion duration for each process in which the cradle head drives the terminal to rotate by the preset angle toward the first preset direction, the second preset direction, the third preset direction, and the fourth preset direction, respectively.
5. A method according to claim 3, wherein the first and second preset directions are opposite directions, and the third and fourth preset directions are opposite directions.
6. The method of claim 5, wherein driving the terminal to rotate by a preset angle in the first preset direction, the second preset direction, the third preset direction, and the fourth preset direction by the pan-tilt according to the preset rotation sequence of the pan-tilt comprises:
the terminal is driven by the cradle head to rotate towards a first preset direction according to a first preset angle;
the terminal is driven by the cradle head to rotate towards the second preset direction according to the first preset angle;
the terminal is driven by the cradle head to rotate towards the third preset direction according to a second preset angle;
and driving the terminal to rotate towards the fourth preset direction according to the second preset angle through the cradle head.
7. The method of claim 6, wherein the first preset angle is within a range of pan-tilt longitudinal rotation and the second preset angle is within a range of pan-tilt lateral rotation, the first preset angle being less than or equal to the second preset angle; or the first preset angle is in the longitudinal rotation range of the terminal, the second preset angle is in the transverse rotation range of the terminal, and the first preset angle is smaller than or equal to the second preset angle.
8. The method of claim 6, wherein the first preset angle is greater than or equal to 20 degrees and less than or equal to 30 degrees; and the second preset angle is greater than or equal to 30 degrees and less than or equal to 40 degrees.
9. The method of claim 5, wherein driving the terminal to rotate by a preset angle in the first preset direction, the second preset direction, the third preset direction, and the fourth preset direction by the pan-tilt according to the preset rotation sequence of the pan-tilt comprises:
the terminal is driven by the cradle head to rotate towards a first preset direction according to a third preset angle;
the terminal is driven by the cradle head to rotate towards the third preset direction according to the third preset angle;
the terminal is driven by the cradle head to rotate towards the second preset direction according to the third preset angle;
and driving the terminal through the cradle head to rotate towards the fourth preset direction according to the third preset angle.
10. The method of claim 1, wherein determining picture motion information based on feature change information of the video picture comprises:
combining video pictures recorded at different time stamps to obtain video picture pairs;
and performing optical flow calculation on pixel points in each video picture pair to obtain the picture motion information.
11. The method of claim 10, wherein the pixel points used for optical flow calculation comprise: each pixel point in each video picture pair, or each pixel point in a downsampled image of each video picture pair.
12. The method of claim 11, wherein said performing optical flow calculations on pixel points in each of said video frame pairs to obtain frame motion information comprises:
determining pixel-point velocities in each video picture pair respectively, wherein the pixel-point velocity includes the pixel motion direction of each video picture;
the generating a picture parameter sequence of the terminal when rotating according to the picture motion information comprises the following steps:
determining the pixel movement duration of each preset direction based on the pixel movement direction of each video picture and the time stamp;
and combining the pixel point movement time lengths in the preset directions according to the recording time sequence of the video picture to obtain a pixel point parameter sequence when the terminal rotates.
13. The method of claim 1, wherein the terminal comprises a front camera and a rear camera; the determining the camera orientation type of the camera relative to the terminal according to the preset motion trend which is met by the picture parameter sequence comprises the following steps: and determining that the camera is a front camera or a rear camera of the terminal according to the preset motion trend which is met by the picture parameter sequence.
14. The method of claim 13, wherein the picture parameter sequence includes a pixel parameter sequence, the pixel parameter sequence being a pixel-direction sequence obtained by arranging the pixel-point movement durations of the preset directions in order.
15. The method according to claim 14, wherein the determining that the camera is a front camera or a rear camera of the terminal according to the preset motion trend that the picture parameter sequence conforms to, includes:
determining target pixel points whose motion duration in each preset direction satisfies the duration condition for that direction and whose pixel-direction sequence matches the preset direction sequence;
and determining, according to the target pixel points, that the camera is a front camera or a rear camera of the terminal.
16. The method of claim 13, wherein the preset motion trend comprises at least one of a preset front-camera motion trend and a preset rear-camera motion trend, the front-camera motion trend and the rear-camera motion trend being in reverse order of each other.
17. The method according to claim 16, wherein the determining that the camera is a front camera or a rear camera of the terminal according to the preset motion trend that the picture parameter sequence conforms to comprises:
counting, among the pixel points of the video picture, the front-camera pixel points that conform to the front-camera motion trend and the rear-camera pixel points that conform to the rear-camera motion trend, to obtain a front-camera pixel count and a rear-camera pixel count;
if the front-camera pixel count is greater than the rear-camera pixel count, determining that the camera is a front camera of the terminal;
and if the front-camera pixel count is less than the rear-camera pixel count, determining that the camera is a rear camera of the terminal.
18. The method of claim 1, wherein the recording the video shot by the terminal using the camera to obtain a video picture comprises:
in the process of driving the terminal through the cradle head to rotate by a preset angle toward different preset directions, capturing each frame of the terminal's screen picture during the motion;
detecting pixel difference information of each frame of the screen picture;
determining a change value of the pixel difference information;
determining a picture recording area in each frame of the screen picture according to the change value;
and recording the picture recording area to obtain the video picture.
19. The method of claim 1, wherein before the terminal is driven to rotate in different preset directions by the cradle head, the method comprises:
and if, when responding to a camera determination instruction, the video picture is in a standby mode for target-object recording, displaying prompt information indicating that the video picture is in the standby mode, until the recording mode of the video picture is determined to be the target-object recording mode.
20. The method according to any one of claims 1-19, wherein after the determining that the camera is a front camera or a rear camera of the terminal, the method further comprises:
determining the position of a target object in the video picture;
determining picture acquisition parameters of the camera according to the position of the target object;
driving the terminal through the cradle head to rotate according to the picture acquisition parameters;
and recording the video picture shot by the terminal with the camera during rotation according to the picture acquisition parameters, wherein the target object is at a preset picture position in the video picture recorded according to the picture acquisition parameters.
21. A device for determining a camera, the device comprising:
the terminal motion module is used for recording videos shot by the terminal through the camera in the process of driving the terminal to rotate in different preset directions through the cradle head in sequence, so as to obtain video pictures;
the information acquisition module is used for determining picture motion information according to the characteristic change information of the video picture;
the parameter generation module is used for generating a picture parameter sequence of the terminal when the terminal rotates according to the picture motion information;
and a camera determination module, configured to determine a camera orientation type of the camera relative to the terminal according to a preset motion trend to which the picture parameter sequence conforms.
22. A handheld cradle head comprising a processor for implementing the steps of the method of any one of claims 1 to 20.
23. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any one of claims 1 to 20 when the computer program is executed.
24. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 20.
CN202310253149.XA 2023-03-06 2023-03-06 Camera judging method, device, computer equipment and storage medium Pending CN116405656A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310253149.XA CN116405656A (en) 2023-03-06 2023-03-06 Camera judging method, device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116405656A true CN116405656A (en) 2023-07-07

Family

ID=87006594



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination