CN113014817A - Method and device for acquiring high-definition high-frame video and electronic equipment


Info

Publication number: CN113014817A
Application number: CN202110241209.7A
Authority: CN (China)
Prior art keywords: video, frame, target, entity object, position information
Legal status: Granted; currently Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Other versions: CN113014817B
Inventor: 林挺裕
Current assignee: Vivo Mobile Communication Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original assignee: Vivo Mobile Communication Co Ltd
Events: application filed by Vivo Mobile Communication Co Ltd; priority to CN202110241209.7A; publication of CN113014817A; application granted; publication of CN113014817B

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N5/00: Details of television systems
    • H04N5/76: Television signal recording

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

The application discloses a method and an apparatus for acquiring high-definition high-frame video, and an electronic device, belonging to the technical field of video processing. The method comprises: acquiring a first video shot by a first camera module and a second video shot by a second camera module; determining a target entity object in the first video, and determining, from the second video, the relative position information of the target entity object within a video frame; and synthesizing a frame to be filled from the target entity object and the relative position information, then using the frame to be filled to supplement the frames of the first video and obtain a high-definition high-frame video.

Description

Method and device for acquiring high-definition high-frame video and electronic equipment
Technical Field
The application belongs to the technical field of video processing, and particularly relates to a method and a device for acquiring a high-definition high-frame video and an electronic device.
Background
With ongoing technical breakthroughs in the cameras integrated into mobile phones, users' demands on these cameras keep rising. The prior art therefore tends to integrate a high-performance camera, such as a high-pixel, high-frame-rate camera, into the mobile phone, so as to capture high-definition high-frame video frames and ultimately obtain a high-definition high-frame video.
However, in the prior art, because of the heavy workload and the large number of output images of such a high-performance camera, the camera heats up severely, and the large amount of heat is difficult to dissipate. This not only affects the overall performance of the mobile phone, but also degrades the image quality of the video frames captured by the camera.
Disclosure of Invention
The embodiments of the application aim to provide a method and an apparatus for acquiring high-definition high-frame video, and an electronic device, which can solve the prior-art problem of severe camera heating when acquiring a high-definition high-frame video.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides a method for acquiring a high-definition high-frame video, which is applied to an electronic device, where the electronic device has a first camera module and a second camera module, and the method includes:
acquiring a first video shot by the first camera module and a second video shot by the second camera module, wherein the frame rate of the first video is less than that of the second video, and the resolution of the first video is greater than that of the second video;
determining a target entity object in the first video, and determining the relative position information of the target entity object in a video frame according to the second video;
and synthesizing a frame to be filled according to the target entity object and the relative position information, and performing frame supplementing processing on the first video by using the frame to be filled to obtain a high-definition high-frame video.
In a second aspect, an embodiment of the present application further provides a method for acquiring a high-definition high-frame video, which is applied to an electronic device, where the electronic device has a target camera module and a plurality of other camera modules, and the method includes:
acquiring a target video shot by the target camera module and a plurality of other videos shot by the other camera modules, wherein the frame rate of the target video is less than that of the other videos, and the resolution of the target video is greater than that of the other videos;
determining a target entity object in the target video, and determining the relative position information of the target entity object in a video frame according to the other videos;
and synthesizing a frame to be filled according to the target entity object and the relative position information, and performing frame supplementing processing on the target video by using the frame to be filled to obtain the high-definition high-frame video.
In a third aspect, an embodiment of the present application provides an apparatus for acquiring a high-definition high-frame video, which is applied to an electronic device, where the electronic device has a first camera module and a second camera module, and the apparatus includes:
a first obtaining module, configured to obtain a first video captured by the first camera module and a second video captured by the second camera module, where a frame rate of the first video is smaller than a frame rate of the second video, and a resolution of the first video is greater than a resolution of the second video;
the first determining module is used for determining a target entity object in the first video and determining the relative position information of the target entity object in a video frame according to the second video;
and the first synthesis module is used for synthesizing a frame to be filled according to the target entity object and the relative position information, and performing frame supplementing processing on the first video by using the frame to be filled to obtain a high-definition high-frame video.
In a fourth aspect, an embodiment of the present application further provides an apparatus for acquiring a high-definition high-frame video, which is applied to an electronic device, where the electronic device has a target camera module and a plurality of other camera modules, and the apparatus includes:
a second obtaining module, configured to obtain a target video captured by the target camera module and a plurality of other videos captured by the other camera modules, where a frame rate of the target video is smaller than frame rates of the other videos, and a resolution of the target video is greater than resolutions of the other videos;
a third determining module, configured to determine a target entity object in the target video, and determine, according to the other videos, relative position information of the target entity object in a video frame;
and the second synthesis module is used for synthesizing a frame to be filled according to the target entity object and the relative position information, and performing frame supplementing processing on the target video by using the frame to be filled to obtain the high-definition high-frame video.
In a fifth aspect, embodiments of the present application further provide an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, and when executed by the processor, the program or instructions implement the steps of the method according to the first aspect.
In a sixth aspect, the present application further provides a readable storage medium, on which a program or instructions are stored, and when the program or instructions are executed by a processor, the program or instructions implement the steps of the method according to the first aspect.
In a seventh aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In the embodiments of the application, a first video shot by a first camera module and a second video shot by a second camera module are obtained, where the frame rate of the first video is less than that of the second video and the resolution of the first video is greater than that of the second video. A target entity object is determined in the first video, and the relative position information of the target entity object within a video frame is determined according to the second video. A frame to be filled is then synthesized from the target entity object and the relative position information, and the frame to be filled is used to supplement the frames of the first video, yielding a high-definition high-frame video. In other words, two camera modules are used to capture, respectively, a high-definition low-frame first video and a low-definition high-frame second video; the first video supplies a higher-resolution target entity object, while the second video supplies the position information of that object in the frame to be filled. The frame to be filled is synthesized from these, and frame supplementation is performed on the first video to finally obtain the high-definition high-frame video. This avoids using a single high-pixel, high-frame-rate camera module to obtain the high-definition high-frame video, thereby reducing the workload and the number of output images of any single camera module, reducing the heat generated by the camera modules, reducing the influence of that heat, and improving the image quality of the final video.
Drawings
Fig. 1 is a flowchart illustrating the steps of a method for acquiring a high-definition high-frame video according to an embodiment of the present application;
Fig. 2 is a schematic diagram of a hardware composition for acquiring high-definition high-frame video according to an embodiment of the present application;
Fig. 3 is a schematic diagram of a process of synthesizing a frame to be filled according to an embodiment of the present application;
Fig. 4 is a flowchart illustrating the specific steps of a method for acquiring a high-definition high-frame video according to an embodiment of the present application;
Fig. 5 is a schematic diagram of another process of synthesizing a frame to be filled according to an embodiment of the present application;
Fig. 6 is a schematic diagram of a process of determining average position information according to an embodiment of the present application;
Fig. 7 is a schematic diagram of a position calibration process provided by an embodiment of the present application;
Fig. 8 is a flowchart illustrating the steps of another method for acquiring high-definition high-frame video according to an embodiment of the present application;
Fig. 9 is a schematic diagram of another process of synthesizing a frame to be filled according to an embodiment of the present application;
Fig. 10 is a schematic diagram of block processing of a video frame according to an embodiment of the present application;
Fig. 11 is a block diagram of an apparatus for acquiring a high-definition high-frame video according to an embodiment of the present application;
Fig. 12 is a block diagram of another apparatus for acquiring high-definition high-frame video according to an embodiment of the present application;
Fig. 13 is a schematic diagram of the hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second", and the like in the description and claims of the present application are used to distinguish between similar elements and not necessarily to describe a particular sequence or chronological order. It should be appreciated that the data so used are interchangeable under appropriate circumstances, so that the embodiments of the application can be practiced in sequences other than those illustrated or described herein. Objects distinguished by "first", "second", and the like are generally of one class, and the number of such objects is not limited; for example, the first object may be one or more than one. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the objects before and after it.
The method for acquiring high-definition high-frame video provided by the embodiments of the present application is described in detail below through specific embodiments and their application scenarios, with reference to the accompanying drawings.
Fig. 1 is a flowchart of steps of a method for acquiring a high-definition high-frame video according to an embodiment of the present application, and as shown in fig. 1, the method may include:
step 101, acquiring a first video shot by the first camera module and a second video shot by the second camera module, wherein the frame rate of the first video is smaller than that of the second video, and the resolution of the first video is greater than that of the second video.
In this step, before the high-definition high-frame video is synthesized, a high-definition low-frame video and a low-definition high-frame video, recorded of the same scene, need to be obtained respectively.
Specifically, a first video may be captured by the first camera module and a second video by the second camera module, such that the frame rate of the first video is less than that of the second video and the resolution of the first video is greater than that of the second video; that is, the first video is a high-definition low-frame video and the second video is a low-definition high-frame video.
Fig. 2 is a schematic diagram of a hardware composition for acquiring high-definition high-frame video according to an embodiment of the present application. As shown in fig. 2, the hardware may be an electronic device comprising a first camera module 10 and a second camera module 20, where the first camera module 10 may be a camera device capable of capturing high-definition low-frame video and the second camera module 20 a camera device capable of capturing low-definition high-frame video. Alternatively, the first camera module 10 and the second camera module 20 may each be camera devices capable of capturing either kind of video: the first camera module 10 may shoot high-definition low-frame video in one working mode and low-definition high-frame video in another; likewise, the second camera module 20 may shoot high-definition low-frame video in one working mode and low-definition high-frame video in another.
For example, the first video may have a frame rate of 30 frames per second, i.e., 30 video frames per second, and a resolution of 4K, i.e., 3840 × 2160 pixels per frame; the second video may have a frame rate of 60 frames per second, i.e., 60 video frames per second, and a resolution of 720p, i.e., 1280 × 720 pixels per frame. The first video is thus a high-definition low-frame video, with higher definition but lower fluency, and the second video a low-definition high-frame video, with lower definition but higher fluency. After frame supplementation is performed from the first video and the second video, a high-definition high-frame video with a frame rate of 60 frames per second and a resolution of 4K can be obtained.
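To make the frame alignment concrete, here is a minimal sketch, not part of the patent's disclosure (the function and its rounding scheme are illustrative assumptions), that lists which 60 fps timestamps have no counterpart in the 30 fps high-resolution stream; these are exactly the instants for which frames must be synthesized:

```python
# Sketch: find which 60 fps timestamps are missing from the 30 fps
# high-resolution stream; these are the frames to be synthesized.
# The frame rates are the example values from the text above.

def missing_timestamps(duration_s: float, low_fps: int = 30, high_fps: int = 60):
    high_ts = {round(i / high_fps, 6) for i in range(int(duration_s * high_fps))}
    low_ts = {round(i / low_fps, 6) for i in range(int(duration_s * low_fps))}
    return sorted(high_ts - low_ts)

# For a 1-second clip, every other 60 fps slot is missing:
print(missing_timestamps(1.0)[:3])   # [0.016667, 0.05, 0.083333]
```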
After a camera module captures video, an image signal processor (ISP) may perform basic image-data processing on the video frames, and resolution enhancement or frame-rate enhancement may then be performed on the processed frames.
The ISP's basic processing of a video frame may include auto focus (AF), auto exposure (AE), and auto white balance (AWB) processing of the frame image through an image front end (IFE) and an image processing engine (IPE).
Step 102, determining a target entity object in the first video, and determining the relative position information of the target entity object in a video frame according to the second video.
In this step, a target entity object may be determined from the first video. An entity object may be an entity that is in the foreground of the shot picture and whose relative position within the picture changes over time. Since the background portion of the picture (everything except the entity object) changes little over time, the largest difference between adjacent video frames (the frames with the smallest time interval between them) is the change in the entity object's position.
For example, if the shot picture captures a person walking along a road, the person is the foreground entity object in the video, while the road and other environmental features are the background. Because the time interval between adjacent video frames is short, both the background and the specific form of the person differ little between such frames; that is, the similarity between them is high, and the largest difference is the offset in the person's position.
In the embodiment of the application, since the video frames of the high-definition low-frame first video have higher resolution, the target entity object can be obtained from the first video with high definition. Meanwhile, since the low-definition high-frame second video has a higher frame rate, it records the position of the entity object at more instants; therefore, the relative position information of the target entity object within a video frame can be obtained from the second video.
Fig. 3 is a schematic diagram of a process of synthesizing a frame to be filled according to an embodiment of the present application. As shown in fig. 3, the first video and the second video are obtained by shooting the same picture: the first video is a high-definition low-frame video containing the higher-resolution video frames A1 and A3, while the second video is a low-definition high-frame video containing the lower-resolution video frames B1, B2, and B3. To obtain a high-definition high-frame video, the A2 video frame, i.e., the frame corresponding to the same time as the B2 video frame in the second video, must be synthesized; this is the frame to be filled.
Accordingly, the frame to be filled is the A2 video frame. Since the A1 and A3 video frames are those of the first video with the smallest time interval to the frame to be filled A2, their similarity to A2 is high; that is, the target entity object in A2 can be determined from the A1 and A3 video frames. Specifically, the target entity object T can be identified from either the A1 or the A3 video frame, and, to guard against one of them not containing the complete target entity object T, it can also be identified from both A1 and A3 simultaneously.
Further, since the frame to be filled A2 corresponds to the same time as the B2 video frame in the second video, the relative position information of the target entity object in A2 is the same as in the B2 video frame; therefore, the relative position information of the target entity object in the video frame can be determined from the B2 video frame of the second video.
Step 103, synthesizing a frame to be filled according to the target entity object and the relative position information, and performing frame supplementation on the first video using the frame to be filled, to obtain a high-definition high-frame video.
In this step, after the target entity object in the frame to be filled is determined from the first video and its relative position information in the frame to be filled is determined from the second video, the frame to be filled may be synthesized from the two, and then used to perform frame supplementation on the high-definition low-frame first video, finally obtaining the high-definition high-frame video.
Specifically, referring to fig. 3, the frame to be filled A2 may first be determined from the first video and the second video; A2 is the video frame missing from the high-definition low-frame first video compared with the low-definition high-frame second video. The target entity object T contained in A2 is then determined from the A1 or A3 video frame, the frames of the first video with the smallest time interval to A2. The relative position information of T in A2 is determined as (x2, y2) from the B2 video frame of the second video corresponding to the same time as A2. The finally synthesized frame to be filled A2 therefore contains the high-resolution target entity object T extracted from the A3 video frame, with its position (xT, yT) in A2 set to the (x2, y2) determined from the B2 video frame.
Further, after the frame to be filled A2 is determined, since the high-definition low-frame first video contains few higher-resolution video frames, A2 may be used to perform frame supplementation on the first video, finally obtaining a high-definition high-frame video comprising the video frames A1, A2, and A3. The resulting video's frames have both high resolution and high count, simultaneously ensuring the smoothness and the definition of the picture.
It should be noted that there are several ways to synthesize the frame to be filled A2 from the target entity object T (determined from the A1 or A3 video frame) and the relative position information (x2, y2) (determined from the B2 video frame). In one implementation, the entity object in the B2 video frame may be directly replaced with the determined target entity object T; in another implementation, the entity object in the A1 or A3 video frame may be deleted, and the target entity object T inserted at the position corresponding to (x2, y2) in the resulting background image. Both implementations yield a high-resolution frame to be filled A2.
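As a rough illustration of the second implementation, the sketch below removes the entity object from a high-resolution reference frame and pastes the extracted object T at the B2-derived position. The OpenCV inpainting call, the mask convention, and all names are assumptions for illustration, not the patent's specified procedure:

```python
import cv2
import numpy as np

def synthesize_fill_frame(ref_frame, obj_patch, obj_mask_ref, pos_xy):
    """Second implementation sketched above: delete the entity object
    from the high-resolution reference frame (A1 or A3), then insert
    the extracted high-resolution object T at the position (x2, y2)
    taken from the B2 video frame. obj_mask_ref is a single-channel
    uint8 mask of the object in ref_frame; all names are assumptions."""
    # Erase the object and fill the hole from the surrounding background.
    background = cv2.inpaint(ref_frame, obj_mask_ref, 3, cv2.INPAINT_TELEA)
    x, y = pos_xy
    h, w = obj_patch.shape[:2]
    # Paste T at the position determined from B2; zero pixels in the
    # patch are treated as transparent.
    roi = background[y:y + h, x:x + w]
    opaque = obj_patch.sum(axis=2, keepdims=True) > 0
    background[y:y + h, x:x + w] = np.where(opaque, obj_patch, roi)
    return background
```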
To sum up, the method for acquiring a high-definition high-frame video provided by the embodiment of the present application comprises: acquiring a first video shot by a first camera module and a second video shot by a second camera module, where the frame rate of the first video is less than that of the second video and the resolution of the first video is greater than that of the second video; determining a target entity object in the first video, and determining, from the second video, the relative position information of the target entity object within a video frame; and synthesizing a frame to be filled from the target entity object and the relative position information, then using the frame to be filled to supplement the frames of the first video and obtain a high-definition high-frame video. Two camera modules are used to capture, respectively, a high-definition low-frame first video and a low-definition high-frame second video; the first supplies a higher-resolution target entity object and the second supplies that object's position in the frame to be filled, from which the frame to be filled is synthesized and used to supplement the first video. This avoids using a single high-pixel, high-frame-rate camera module to obtain the high-definition high-frame video, thereby reducing the workload and number of output images of any single module, reducing the heat generated, reducing the influence of that heat, and improving the image quality of the final video.
Fig. 4 is a flowchart illustrating specific steps of a method for acquiring a high-definition high-frame video according to an embodiment of the present application, as shown in fig. 4, the method may include:
step 201, acquiring a first video shot by the first camera module and a second video shot by the second camera module.
The implementation manner of this step is similar to the implementation process of step 101 described above, and is not described here again.
Step 202, determining a first target video frame in the first video, where the first target video frame is a video frame with a minimum time interval with the frame to be filled in all video frames of the first video.
In this step, the video frames that the second video (which has more frames) contains but the first video lacks may be determined from the two videos; each such frame is determined as a frame to be filled, and the video frame of the first video with the smallest time interval to the frame to be filled is determined as the first target video frame.
Referring to fig. 3, the frame to be filled A2 may first be determined from the first video and the second video, and the video frames of the first video with the smallest time interval to A2 are the A1 and A3 video frames; therefore, either A1 or A3 may be determined as the first target video frame.
Meanwhile, since the first target video frame is used to determine the target entity object T it contains, and the A1 video frame does not contain the complete target entity object T, the A3 video frame may be determined as the first target video frame.
Step 203, determining a target entity object in the first target video frame.
In this step, a target entity object may be determined from a first target video frame in the first video.
Optionally, step 203 may specifically include:
Sub-step 2031, determining position information corresponding to the entity object in the second target video frame.
In this step, since the target entity object must be determined from the first target video frame and taken as the entity object contained in the frame to be filled, the range of positions where the target entity object may lie in the first target video frame can be determined from the position information of the entity object in the second target video frame of the second video, and the target entity object can then be sought within the region corresponding to that range.
Specifically, referring to fig. 3, the second target video frame may be the B2 video frame of the second video corresponding to the same time as the frame to be filled A2; that is, the position of the target entity object T in A2 is the same as the position of the entity object in the second target video frame B2. Moreover, A2 and the first target video frame A3 are video frames with the smallest time interval, so the positions of the entity object in them deviate only slightly. Therefore, the position information (x2, y2) of the entity object in the second target video frame B2 can be used to determine the range of positions where the target entity object T may lie in the first target video frame A3.
Sub-step 2032, determining a position change range of the entity object in the first target video frame according to the position information and a preset position change value.
In this step, the position change range of the entity object in the first target video frame may be determined from the position information of the entity object in the second target video frame and a preset position change value.
Referring to fig. 3, the position information of the entity object in the second target video frame B2 is (x2, y2); if the preset position change value is (Δx, Δy), the position change range of the entity object in the first target video frame A3 can be determined to be (x2 ± Δx, y2 ± Δy).
For example, if the frame rate of the first video is relatively low, the interval between adjacent video frames is relatively long, meaning the entity object's position may change considerably between them, so a larger preset position change value may be set. Likewise, if the first video records a scene of a subject running, the position change between adjacent frames may be large, so a larger preset position change value may be set; if it records a subject walking, the position change may be small, so a smaller preset position change value may be set.
Sub-step 2033, determining the target entity object in the region of the first target video frame corresponding to the position change range.
In this step, after the position change range of the entity object in the first target video frame is determined, the target entity object may be determined within an area corresponding to the position change range in the first target video frame.
Referring to fig. 3, if the position change range of the entity object in the first target video frame A3 is (x2 ± Δx, y2 ± Δy), the target entity object T contained in the first target video frame can be determined directly within the region of A3 corresponding to (x2 ± Δx, y2 ± Δy).
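A minimal sketch of this restricted search follows; the clamping and the coordinate bookkeeping are the only parts taken from the text, while the caller-supplied detector and its box format are assumptions:

```python
def search_region(x2, y2, dx, dy, frame_shape):
    # Clamp the range (x2 - dx, x2 + dx) x (y2 - dy, y2 + dy) from
    # sub-step 2032 to the frame bounds.
    h, w = frame_shape[:2]
    return max(0, x2 - dx), max(0, y2 - dy), min(w, x2 + dx), min(h, y2 + dy)

def detect_target_in_region(frame, x2, y2, dx, dy, detect_fn):
    """Run an arbitrary caller-supplied detector only inside the region
    where the object can plausibly be (sub-step 2033); detect_fn and
    its box format (x, y, w, h) are illustrative assumptions."""
    x0, y0, x1, y1 = search_region(x2, y2, dx, dy, frame.shape)
    box = detect_fn(frame[y0:y1, x0:x1])   # detector sees only the ROI
    if box is None:
        return None
    bx, by, bw, bh = box
    return bx + x0, by + y0, bw, bh        # map back to frame coordinates
```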
Step 204, determining a second target video frame of the second video corresponding to the same time as the frame to be filled.
Specifically, after step 203, the relative position information may be determined through steps 204 to 205, or may be determined through steps 206 to 207.
In this step, after the target entity object included in the frame to be padded is determined according to the first video, the relative position information of the target entity object in the frame to be padded may be further determined according to the second video.
Specifically, a second target video frame corresponding to the same time as the frame to be filled is determined from the second video. Since the two correspond to the same time, the position information of the entity object is the same in the frame to be filled and in the second target video frame.
Step 205, determining the position information corresponding to the entity object in the second target video frame as the relative position information.
In this step, the entity object included in the second target video frame may be determined first, and then the position information of the entity object included in the second target video frame is directly determined as the relative position information corresponding to the target entity object in the frame to be filled.
Step 206, determining a first target video frame and a third target video frame in the first video, where the first target video frame and the third target video frame are two video frames in the first video with a minimum time interval to the frame to be filled, and determining position information of a target entity object included in the first target video frame and the third target video frame.
In this step, after the target entity object included in the frame to be padded is determined according to the first video, the relative position information of the target entity object in the frame to be padded may be further determined jointly according to the first video and the second video.
Specifically, the two video frames of the first video with the smallest time interval to the frame to be filled may first be determined and taken as the first target video frame and the third target video frame; the relative position information of the target entity object in the frame to be filled is then determined from the position information of the target entity object contained in these two frames.
Fig. 5 is a schematic diagram of another process of synthesizing a frame to be filled according to an embodiment of the present application. As shown in fig. 5, the frame to be filled A2 may first be determined from the first video and the second video; the first target video frame and the third target video frame of the first video with the smallest time interval to A2 are then determined to be A1 and A3, respectively, and the position information of the target entity object contained in A1 and A3 is determined to be (x1, y1) and (x3, y3), respectively.
Step 207, determining the relative position information according to the position information of the target entity object contained in the first target video frame and the third target video frame.
In this step, the relative position information corresponding to the target entity object in the frame to be filled may be further determined according to the position information of the target entity object included in the first target video frame and the third target video frame.
Referring to fig. 5, the position information of the target entity object T contained in the first target video frame A1 and the third target video frame A3 is (x1, y1) and (x3, y3), respectively. Since the frame to be filled A2 lies between A1 and A3, the target entity object T moves along some motion trajectory through the consecutive frames A1, A2, A3. The average position information of T over A1 and A3,

((x1 + x3) / 2, (y1 + y3) / 2),

can therefore be directly determined as the relative position information of T in the frame to be filled A2.
Further, a frame to be filled A2' may be synthesized from the average position information and the target entity object T.
Optionally, if the motion of the target entity object T through the consecutive frames A1, A2, A3 is uniform, the average position information may be directly determined as the relative position information of T in the frame to be filled A2'. If the motion is non-uniform, the determined average position information must be compensated, and the position of the target entity object in A2' adjusted, to obtain the final frame to be filled A2. Correspondingly, step 207 may specifically include:
Sub-step 2071, calculating average position information of the target entity object contained in the first target video frame and the third target video frame.
In this step, average position information of the target entity object contained in the first target video frame and the third target video frame may be first calculated.
In the embodiment of the present application, a video frame may contain multiple entity objects, and processing may therefore be performed per entity object. Fig. 6 is a schematic diagram of a process of determining average position information according to an embodiment of the present application. As shown in fig. 6, a video frame contains two entity objects P and J. In the first target video frame A1, the position information of P is (xP1, yP1) and that of J is (xJ1, yJ1); in the third target video frame A3, the position information of P is (xP3, yP3) and that of J is (xJ3, yJ3).
Accordingly, the average position information of entity object P can be determined as ((xP1 + xP3) / 2, (yP1 + yP3) / 2) and taken as P's position information (xP2, yP2) in the frame to be filled A2; likewise, the average position information of entity object J can be determined as ((xJ1 + xJ3) / 2, (yJ1 + yJ3) / 2) and taken as J's position information (xJ2, yJ2) in A2.
Sub-step 2072, determining a second target video frame of the second video corresponding to the same time as the frame to be filled, and determining the position information of the entity object in the second target video frame.
In this step, after the average position information is determined, it may be further compensated according to the second target video frame of the second video.
The second target video frame is the video frame of the second video corresponding to the same time as the frame to be filled. Referring to fig. 5, the position of the target entity object T in the frame to be filled A2 is the same as the position of the entity object in the second target video frame B2, so the position information (x2, y2) of the entity object in B2 can be determined.
Sub-step 2073, calibrating the average position information according to the position information of the entity object in the second target video frame to obtain the relative position information.
In this step, the average position information may be calibrated according to the position information of the entity object in the second target video frame, so as to obtain the relative position information corresponding to the target entity object in the frame to be filled.
Referring to fig. 5, the position of the target entity object in A2', obtained from the average position information, may be calibrated according to the position information (x2, y2) of the entity object in the second target video frame B2, yielding the final frame to be filled A2.
Fig. 7 is a schematic diagram of a position calibration process provided by an embodiment of the present application. As shown in fig. 7, a video frame contains two entity objects L and K. A video frame A2' may be synthesized, based on other video frames of the first video, from the entity objects L and K and the average position information; the positions of L and K in A2' may then be calibrated according to their position information in the second target video frame, yielding the final frame to be filled A2, as sketched below.
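Sub-steps 2071 through 2073 amount to a midpoint plus a correction. A minimal sketch follows, where the blending weight alpha is an assumption (the text itself simply calibrates to the B2 position), and the example coordinates are assumed rather than taken from the figures:

```python
def average_position(p1, p3):
    # Sub-step 2071: midpoint of the object's positions in A1 and A3.
    return ((p1[0] + p3[0]) / 2.0, (p1[1] + p3[1]) / 2.0)

def calibrate(avg_pos, b2_pos, alpha=1.0):
    """Sub-step 2073: shift the averaged position toward the position
    observed in the second target video frame B2. alpha=1.0 snaps fully
    to B2; a fractional weight for non-uniform motion is an assumption."""
    ax, ay = avg_pos
    bx, by = b2_pos
    return (ax + alpha * (bx - ax), ay + alpha * (by - ay))

# Illustrative coordinates for an object like L in Fig. 7 (assumed):
pos_in_a2 = calibrate(average_position((100, 80), (140, 80)), (125, 82))
print(pos_in_a2)   # (125.0, 82.0) with the default alpha
```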
Step 208, deleting the entity object in the second target video frame.
In this step, after determining the target entity object included in the frame to be filled and the relative position information of the target entity object in the frame to be filled, the frame to be filled may be synthesized according to the target entity object and the corresponding relative position information.
Specifically, the entity object in the second target video frame may be deleted first.
Step 209, inserting the target entity object in the position corresponding to the relative position information in the second target video frame, synthesizing the frame to be filled, and performing frame supplementing processing on the first video by using the frame to be filled to obtain a high-definition high-frame video.
In this step, after deleting the entity object in the second target video frame, the target entity object is inserted into the position corresponding to the relative position information in the second target video frame, so as to synthesize the frame to be filled, and further perform frame supplementing processing on the first video of the high-definition low frame by using the frame to be filled, so as to finally obtain the video of the high-definition high frame.
In this embodiment of the application, since the second target video frame is the video frame of the second video corresponding to the same time as the frame to be filled, i.e., the position of the entity object in the second target video frame is the same as the position of the target entity object in the frame to be filled, the entity object in the second target video frame can be directly replaced with the target entity object to obtain the frame to be filled.
In the embodiment of the present application, the frame to be filled may also be synthesized based on the first target video frame or the third target video frame of the first video, the frames with the smallest time interval to the frame to be filled. Specifically, the entity object contained in the first or third target video frame may be deleted, and the target entity object inserted at the position corresponding to the relative position information in that frame, so as to synthesize the frame to be filled.
Optionally, while acquiring a high-definition high-frame video from a first video shot by the first camera module and a second video shot by the second camera module, the heat generation of the two camera modules can be detected in real time, so that their working modes can be adjusted according to the detected heat generation. Specifically, the method includes the following steps:
step 210, detecting heat generation amounts of the first camera module and the second camera module.
In this step, the amounts of heat generation of the first and second image pickup modules can be detected in real time.
Specifically, the first camera module captures the high-definition low-frame first video while the second camera module captures the low-definition high-frame second video; that is, the workloads of the two modules differ, so the heat they generate differs as well: one of the two modules generates more heat and the other less.
Step 211, determining a difference value of the heat generation amounts of the first camera module and the second camera module.
In this step, a difference in the amounts of heat generation between the first and second image pickup modules may be determined in accordance with the amounts of heat generation of the first and second image pickup modules.
Step 212, controlling the first camera module to shoot the second video and the second camera module to shoot the first video when the difference value is greater than or equal to a preset heating value.
In this step, if the difference between the heat generation amounts of the first and second camera modules is greater than or equal to the preset heating value, the workloads of the two modules differ substantially, which is what causes the large difference in heat generation. The two modules can then be controlled to swap working modes: the module with the smaller workload and smaller heat generation is switched to the heavier working mode, increasing its workload, while the module with the larger workload and larger heat generation is switched to the lighter working mode, decreasing its workload. The workload is thereby balanced between the two camera modules, their heat generation is balanced, and the degradation of either module's performance from excessive heat is avoided.
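A minimal sketch of this balancing policy follows; the temperature readings, the threshold, and the `mode` attribute on the camera objects are assumptions for illustration:

```python
def balance_camera_load(temp1_c, temp2_c, preset_diff_c, cam1, cam2):
    """Steps 210 to 212 as a sketch: when the difference in heat
    generation between the two modules reaches the preset value, the
    modules swap working modes, so the hotter, heavier-loaded module
    takes over the lighter role. The inputs and the camera API are
    illustrative assumptions."""
    if abs(temp1_c - temp2_c) >= preset_diff_c:
        # One module now shoots the low-definition high-frame video,
        # the other the high-definition low-frame video, and vice versa
        # on the next trigger.
        cam1.mode, cam2.mode = cam2.mode, cam1.mode
```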
To sum up, the method for acquiring a high-definition high-frame video provided by the embodiment of the present application comprises: acquiring a first video shot by a first camera module and a second video shot by a second camera module, where the frame rate of the first video is less than that of the second video and the resolution of the first video is greater than that of the second video; determining a target entity object in the first video, and determining, from the second video, the relative position information of the target entity object within a video frame; and synthesizing a frame to be filled from the target entity object and the relative position information, then using the frame to be filled to supplement the frames of the first video and obtain a high-definition high-frame video. This avoids using a single high-pixel, high-frame-rate camera module to obtain the high-definition high-frame video, thereby reducing the workload and number of output images of any single module, reducing the heat generated, reducing the influence of that heat, and improving the image quality of the final video.
In addition, the working modes of the two camera modules can be mutually adjusted according to their actual heat generation, so that the workload between them is balanced, the heat generation between them is balanced, and the degradation of either module's performance from excessive heat is avoided.
Fig. 8 is a flowchart illustrating steps of another method for acquiring high-definition high-frame video according to an embodiment of the present application, where as shown in fig. 8, the method may include:
step 301, acquiring a target video shot by the target camera module and a plurality of other videos shot by the other camera modules, where a frame rate of the target video is less than frame rates of the other videos, and a resolution of the target video is greater than resolutions of the other videos.
In this embodiment of the application, the electronic device for acquiring the high-definition high-frame video may include a plurality of camera modules: a target camera module capable of capturing high-definition low-frame video, and a plurality of other camera modules capable of capturing low-definition high-frame video.
Step 302, determining a target entity object in the target video, and determining the relative position information of the target entity object in the video frame according to the other videos.
In this step, a target entity object may be determined from the target video. Since the video frames of the high-definition low-frame target video have higher resolution, the target entity object can be obtained from the target video with high definition. Meanwhile, since the low-definition high-frame other videos have higher frame rates, they record the position of the entity object at more instants, so the relative position information of the target entity object within a video frame can be obtained from the other videos.
In this embodiment of the present application, a plurality of other camera modules may be included; that is, a plurality of low-definition high-frame other videos may be captured by the other camera modules, and the frame counts of different other videos may be the same or different. Fig. 9 is a schematic diagram of another process of synthesizing frames to be filled provided in the embodiment of the present application. As shown in fig. 9, the first, second, and third videos are captured of the same picture. The first video is the high-definition low-frame target video, containing the higher-resolution video frames A1, A4, and A7. The second and third videos are low-definition high-frame other videos whose frame rates are equal and twice that of the first video; the second video contains the lower-resolution video frames B2, B4, and B6, and the third video contains the lower-resolution video frames C1, C3, C5, and C7. Therefore, to obtain a high-definition high-frame video, the A2, A3, A5, and A6 video frames must be synthesized: A2 is the frame to be filled corresponding to the same time as B2 in the second video; A3 corresponds to C3 in the third video; A5 corresponds to C5 in the third video; and A6 corresponds to B6 in the second video.
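The bookkeeping of which auxiliary stream supplies each missing slot can be sketched as follows; the slot numbers are read off Fig. 9 as described above, and the dictionary-based lookup is an illustrative assumption:

```python
# Sketch of the frame-to-stream bookkeeping in Fig. 9. The target video
# has frames at slots 1, 4, 7; the second video at 2, 4, 6; the third
# at 1, 3, 5, 7.

target_slots = {1, 4, 7}
aux_streams = {"second": {2, 4, 6}, "third": {1, 3, 5, 7}}

fill_sources = {
    t: next(name for name, slots in aux_streams.items() if t in slots)
    for t in sorted(set(range(1, 8)) - target_slots)
}
print(fill_sources)  # {2: 'second', 3: 'third', 5: 'third', 6: 'second'}
```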
Step 303, synthesizing a frame to be filled according to the target entity object and the relative position information, and performing frame supplementing processing on the target video by using the frame to be filled to obtain a high-definition high-frame video.
In this step, after the target entity object in the frame to be filled is determined from the target video and the relative position information of the target entity object in the frame to be filled is determined from the other videos, the frame to be filled may be synthesized from the two, so that frame supplementation can be performed on the high-definition low-frame target video with the frames to be filled, finally obtaining the high-definition high-frame video.
Specifically, referring to fig. 9, the frames to be filled A2, A3, A5, and A6 may first be determined from the first video (the target video) and the second and third videos (the other videos); they are the video frames missing from the high-definition low-frame first video compared with the low-definition high-frame second and third videos. The target entity object contained in A2 is then determined from the A1 video frame, which has the smallest time interval to A2 in the first video, and the relative position information of that object is determined from the B2 video frame of the second video corresponding to the same time as A2; the high-resolution target entity object in A2 and its relative position information are thus both determined, and A2 is synthesized. The high-resolution target entity objects contained in A3, A5, and A6, and their relative positions in those frames, are determined in turn in the same way, so that A3, A5, and A6 are synthesized.
Further, after the frames to be filled A2, A3, A5, and A6 are determined, since the high-definition low-frame first video contains few higher-resolution frames, frame supplementation may be performed on it with A2, A3, A5, and A6, finally obtaining a high-definition high-frame video comprising the video frames A1, A2, A3, A4, A5, A6, and A7. The resulting video's frames have both high resolution and high count, simultaneously ensuring smoothness and definition.
In addition, it should be noted that in the process of acquiring the high-definition high-frame video, only the entity object in a video frame is identified and its position information determined; the background portion of the frame is not processed. Therefore, the video frame can be divided into blocks in advance, the target blocks where the entity object lies can be determined, and identification of the entity object and determination of its position can then be confined to the regions of the frame corresponding to the target blocks, reducing the data processing load of the whole procedure.
Fig. 10 is a schematic diagram of block processing of a video frame according to an embodiment of the present application. As shown in fig. 10, a video frame with a resolution of 4K, that is, a frame containing 3840 × 2160 pixels, may be divided into 8 × 6 blocks, each containing 480 × 360 pixels; a video frame with a resolution of 720P, that is, a frame containing 1280 × 720 pixels, may be divided into 8 × 6 blocks, each containing 160 × 120 pixels. For example, if the first target video frame in the first video is the 4K video frame shown in fig. 10, identifying the entity object in that frame would otherwise require processing all 3840 × 2160 pixels it contains. If the video frame is first divided into blocks and the entity object is found to lie in, for example, 10 target blocks, only the 10 × 480 × 360 = 1,728,000 pixels contained in those target blocks need to be identified and processed, which reduces the amount of data processed.
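As a quick sanity check of that data reduction, the snippet below redoes the fig. 10 arithmetic; the 8 × 6 grid and the assumption of 10 target blocks come from the example above, and the function name is illustrative.

```python
# Pixel counts with and without block processing for the fig. 10 example.
def block_pixels(frame_w, frame_h, grid_w=8, grid_h=6, target_blocks=10):
    block_w, block_h = frame_w // grid_w, frame_h // grid_h   # 480 x 360 for 4K
    full = frame_w * frame_h                                  # whole-frame recognition
    reduced = target_blocks * block_w * block_h               # target blocks only
    return full, reduced

full, reduced = block_pixels(3840, 2160)
print(full, reduced, f"{reduced / full:.1%}")   # 8294400 1728000 20.8%
```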
In addition, in the embodiment of the present application, because an image can be accurately reconstructed from a small set of sparse representation coefficients over a suitable dictionary under fairly mild conditions, a low-resolution image block dictionary and a high-resolution image block dictionary may be jointly trained on pairs of sample low-resolution and high-resolution image blocks. A high-resolution image block can then be reconstructed by combining the sparse representation obtained for a low-resolution image block with the trained low-resolution and high-resolution dictionaries, thereby improving the resolution of the video.
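The reconstruction step can be sketched as follows. This is a minimal illustration under stated assumptions: in practice the dictionaries D_l and D_h would be trained jointly on sample patch pairs, but here they are random so the snippet runs, and scikit-learn's SparseCoder stands in for whichever sparse solver is actually used.

```python
# Sparse-representation super-resolution in miniature: compute the sparse
# code of a low-res patch over D_l, then decode the SAME code over D_h.
import numpy as np
from sklearn.decomposition import SparseCoder

rng = np.random.default_rng(0)
n_atoms, lr_dim, hr_dim = 256, 5 * 5, 10 * 10            # 5x5 LR patch -> 10x10 HR patch
D_l = rng.standard_normal((n_atoms, lr_dim))
D_l /= np.linalg.norm(D_l, axis=1, keepdims=True)        # OMP expects unit-norm atoms
D_h = rng.standard_normal((n_atoms, hr_dim))             # jointly trained with D_l in practice

coder = SparseCoder(dictionary=D_l, transform_algorithm="omp",
                    transform_n_nonzero_coefs=8)
lr_patch = rng.standard_normal((1, lr_dim))              # a vectorized low-res patch
alpha = coder.transform(lr_patch)                        # sparse representation over D_l
hr_patch = alpha @ D_h                                   # high-res reconstruction over D_h
print(hr_patch.shape)                                    # (1, 100)
```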
Furthermore, a relational dictionary set for preceding and following frames can be obtained by training on a collected training data set of preceding and following video frames, and the sparse representations of the two frames can be obtained using the learned relational dictionary set. A function mapping is then established by nonlinear regression: it takes the sparse representation of the preceding frame as input and yields the sparse representation of the following frame as output, the regression determining the function mapping coefficients. Finally, frame interpolation processing is performed on the captured low-frame-rate video using the learned relational dictionary set and the function mapping, thereby increasing the frame rate of the video.
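A rough sketch of this step follows. The text above calls for a nonlinear regression between the sparse representations of preceding and following frames; kernel ridge regression and the synthetic codes below are stand-ins chosen only so the snippet is self-contained, and averaging the two codes to approximate an intermediate frame is an assumed interpolation rule, not taken from the patent.

```python
# Learn prev-code -> next-code, then form an intermediate code for interpolation.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(1)
n_pairs, n_atoms = 500, 64
prev_codes = rng.standard_normal((n_pairs, n_atoms))                        # codes of frames t
next_codes = prev_codes @ (0.1 * rng.standard_normal((n_atoms, n_atoms)))  # codes of frames t+1

mapping = KernelRidge(kernel="rbf", alpha=1.0).fit(prev_codes, next_codes)

new_prev = rng.standard_normal((1, n_atoms))
predicted_next = mapping.predict(new_prev)
mid_code = 0.5 * (new_prev + predicted_next)   # assumed rule for the in-between frame
print(mid_code.shape)                          # (1, 64); decode via the dictionary set
```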
It should be noted that, in the method for acquiring a high-definition high-frame video provided in the embodiments of the present application, the execution subject may be an apparatus for acquiring a high-definition high-frame video, or a control module within that apparatus for executing the method. In the embodiments of the present application, the method is described by taking, as an example, an apparatus for acquiring a high-definition high-frame video that executes the method.
Fig. 11 is a block diagram of an apparatus for acquiring a high definition high frame video according to an embodiment of the present application, and as shown in fig. 11, the apparatus 400 includes:
a first obtaining module 401, configured to obtain a first video captured by the first camera module and a second video captured by the second camera module, where a frame rate of the first video is smaller than a frame rate of the second video, and a resolution of the first video is greater than a resolution of the second video;
a first determining module 402, configured to determine a target entity object in the first video, and determine, according to the second video, relative position information of the target entity object in a video frame;
a first synthesizing module 403, configured to synthesize a frame to be filled according to the target entity object and the relative position information, and perform frame supplementing processing on the first video by using the frame to be filled, so as to obtain a high-definition high-frame video.
Optionally, the first determining module 402 specifically includes:
a first determining submodule, configured to determine a first target video frame in the first video, where the first target video frame is the video frame, among all video frames of the first video, with the smallest time interval to the frame to be filled;
a second determining submodule, configured to determine a target entity object in the first target video frame;
the first determining module 402, further comprising:
a third determining submodule, configured to determine a second target video frame in the second video, where the time of the second target video frame is the same as that of the frame to be filled;
and the fourth determining submodule is used for determining the position information corresponding to the entity object in the second target video frame as the relative position information.
Optionally, the first synthesizing module 403 specifically includes:
a deleting submodule, configured to delete an entity object in the second target video frame;
and the synthesis submodule is used for inserting the target entity object into the position corresponding to the relative position information in the second target video frame and synthesizing the frame to be filled.
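Expressed as code, the deletion-and-insertion synthesis performed by these two submodules might look like the sketch below; the mask-and-patch representation of the entity object and the mean-colour filling of the deleted region are assumptions made purely for illustration.

```python
# Delete the low-res entity object from the second target frame, then paste
# the high-res object at the position given by the relative position info.
import numpy as np

def synthesize_frame(second_frame, old_mask, obj_patch, obj_mask, top_left):
    frame = second_frame.copy()
    # Deletion: overwrite the old object with the average background colour
    # (a real system would inpaint or upscale the background instead).
    bg = frame[~old_mask].mean(axis=0).astype(frame.dtype)
    frame[old_mask] = bg
    # Insertion at the relative position (the slice is a view into `frame`).
    y, x = top_left
    h, w = obj_mask.shape
    roi = frame[y:y + h, x:x + w]
    roi[obj_mask] = obj_patch[obj_mask]
    return frame

# Toy demo: delete a 3x3 object near the top-left, insert a bright one at (4, 4).
frame = np.zeros((8, 8, 3), np.uint8)
old = np.zeros((8, 8), bool); old[1:4, 1:4] = True
patch = np.full((3, 3, 3), 255, np.uint8)
out = synthesize_frame(frame, old, patch, np.ones((3, 3), bool), (4, 4))
print(out[5, 5], out[2, 2])   # [255 255 255] [0 0 0]
```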
Optionally, the second determining sub-module specifically includes:
the first determining unit is used for determining the position information corresponding to the entity object in the second target video frame;
a second determining unit, configured to determine a position change range of the entity object in the first target video frame according to the position information and a preset position change value;
a third determining unit, configured to determine the target entity object in an area corresponding to the position change range in the first target video frame.
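A small sketch of this restricted search follows; the bounding-box convention and the numbers are illustrative, with the preset position change value taken as a pixel radius.

```python
# Derive the search window in the first target video frame from the object's
# position in the second target frame and a preset position change value.
def search_window(pos, delta, frame_w, frame_h):
    """pos = (x0, y0, x1, y1) object bbox; delta = preset position change value."""
    x0, y0, x1, y1 = pos
    return (max(0, x0 - delta), max(0, y0 - delta),
            min(frame_w, x1 + delta), min(frame_h, y1 + delta))

# Object at (1000, 600)-(1400, 900), allowed to move at most 120 px:
print(search_window((1000, 600, 1400, 900), 120, 3840, 2160))
# -> (880, 480, 1520, 1020): only this region is scanned, not the full frame.
```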
Optionally, the first determining module 402 specifically includes:
a fifth determining submodule, configured to determine a first target video frame and a third target video frame in the first video, where the first target video frame and the third target video frame are two video frames in the first video with a minimum time interval to the frame to be filled, and determine position information of a target entity object included in the first target video frame and the third target video frame;
and a sixth determining submodule, configured to determine the relative position information according to position information of the target entity object included in the first target video frame and the third target video frame.
Optionally, the sixth determining sub-module specifically includes:
a calculating unit, configured to calculate average position information of a target entity object included in the first target video frame and the third target video frame;
a fourth determining unit, configured to determine a second target video frame in the second video, where the time of the second target video frame is the same as that of the frame to be filled, and determine location information of an entity object in the second target video frame;
and the calibration unit is used for calibrating the average position information according to the position information of the entity object in the second target video frame to obtain the relative position information.
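In code, the averaging and calibration might look like the sketch below; the equal blend between the averaged position and the second-frame position is an assumption, since the calibration rule itself is not spelled out here.

```python
# Average the object's position over the two nearest first-video frames,
# then calibrate with the same-time second-video observation.
import numpy as np

def relative_position(pos_first, pos_third, pos_second, blend=0.5):
    avg = (np.asarray(pos_first, float) + np.asarray(pos_third, float)) / 2.0
    return blend * avg + (1.0 - blend) * np.asarray(pos_second, float)

# Centre at (100, 40) in the first target frame, (120, 48) in the third,
# observed at (112, 45) in the second video at the fill instant:
print(relative_position((100, 40), (120, 48), (112, 45)))   # [111.  44.5]
```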
Optionally, the apparatus further comprises:
a detection module configured to detect heat generation amounts of the first camera module and the second camera module;
a second determination module configured to determine a difference in heat generation amounts of the first camera module and the second camera module;
and a control module, configured to control, in a case that the difference value is greater than or equal to a preset heat generation value, the first camera module to shoot to obtain the second video and the second camera module to shoot to obtain the first video.
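The role-swapping behaviour these three modules implement can be summarized in a few lines; the module labels, temperatures and threshold below are illustrative only.

```python
# Swap which module shoots the heavier high-definition low-frame video when
# the heat generation difference reaches the preset value.
def assign_roles(heat_first, heat_second, preset_value):
    """Return (module shooting the first video, module shooting the second video)."""
    if heat_first - heat_second >= preset_value:
        # The first module is running hot: it takes over the lighter
        # low-definition high-frame second video, and vice versa.
        return "second_module", "first_module"
    return "first_module", "second_module"

print(assign_roles(heat_first=52.0, heat_second=41.0, preset_value=8.0))
# -> ('second_module', 'first_module')
```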
Fig. 12 is a block diagram of another apparatus for acquiring high-definition high-frame video according to an embodiment of the present application, and as shown in fig. 12, the apparatus 500 includes:
a second obtaining module 501, configured to obtain a target video captured by the target camera module and a plurality of other videos captured by the other camera modules, where a frame rate of the target video is smaller than frame rates of the other videos, and a resolution of the target video is greater than resolutions of the other videos;
a third determining module 502, configured to determine a target entity object in the target video, and determine, according to the other videos, relative position information of the target entity object in a video frame;
and a second synthesizing module 503, configured to synthesize a frame to be filled according to the target entity object and the relative position information, and perform frame supplementing processing on the target video by using the frame to be filled, so as to obtain a high-definition high-frame video.
The apparatus for acquiring a high-definition high-frame video in the embodiments of the present application may be a standalone device, or may be a component, an integrated circuit, or a chip in a terminal. The device may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine or a self-service machine, and the like; the embodiments of the present application are not specifically limited in this respect.
The apparatus for acquiring a high-definition high-frame video in the embodiments of the present application may be a device with an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, and the embodiments of the present application are not specifically limited in this respect.
The device for acquiring a high-definition high-frame video provided in the embodiment of the present application can implement each process implemented by the device for acquiring a high-definition high-frame video in the method embodiments of fig. 1, fig. 4, and fig. 8, and for avoiding repetition, details are not repeated here.
To sum up, the apparatus for acquiring a high-definition high-frame video provided by the embodiments of the present application: acquires a first video shot by a first camera module and a second video shot by a second camera module, where the frame rate of the first video is less than that of the second video and the resolution of the first video is greater than that of the second video; determines a target entity object in the first video and determines, according to the second video, the relative position information of the target entity object in a video frame; and synthesizes a frame to be filled according to the target entity object and the relative position information, then performs frame supplementing processing on the first video using the frame to be filled to obtain a high-definition high-frame video. In the present application, two camera modules are used to acquire, respectively, a high-definition low-frame first video and a low-definition high-frame second video. The higher-resolution target entity object is obtained from the high-definition low-frame first video, and the position information of the target entity object in the frame to be filled is obtained from the low-definition high-frame second video, so that the frame to be filled can be synthesized and used to perform frame supplementing processing on the first video, finally yielding the high-definition high-frame video. This avoids using a single high-pixel, high-frame-rate camera module to obtain the high-definition high-frame video, which reduces the workload and the number of output images of a single camera module, reduces the heat generated by the camera modules, reduces the influence of that heat on the performance of the whole device, and improves the image quality of the finally obtained video.
Optionally, an embodiment of the present application further provides an electronic device, including a processor, a memory, and a program or instruction stored in the memory and capable of running on the processor, where the program or instruction, when executed by the processor, implements each process of the above embodiments of the method for acquiring a high-definition high-frame video and can achieve the same technical effect; to avoid repetition, details are not repeated here.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 13 is a schematic hardware structure diagram of an electronic device implementing an embodiment of the present application.
The electronic device 600 includes, but is not limited to: a radio frequency unit 601, a network module 602, an audio output unit 603, an input unit 604, a sensor 605, a display unit 606, a user input unit 607, an interface unit 608, a memory 609, a processor 610, and the like.
Those skilled in the art will appreciate that the electronic device 600 may further comprise a power source (e.g., a battery) for supplying power to the various components, and the power source may be logically connected to the processor 610 through a power management system, so as to implement functions such as managing charging, discharging, and power consumption through the power management system. The electronic device structure shown in fig. 13 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than those shown, combine some components, or arrange the components differently, and details are not repeated here.
The processor 610 is configured to acquire a first video captured by the first camera module and a second video captured by the second camera module, where a frame rate of the first video is smaller than a frame rate of the second video, and a resolution of the first video is greater than a resolution of the second video;
determining a target entity object in the first video, and determining the relative position information of the target entity object in a video frame according to the second video;
and synthesizing a frame to be filled according to the target entity object and the relative position information, and performing frame supplementing processing on the first video by using the frame to be filled to obtain a high-definition high-frame video.
In the present application, two camera modules are used to acquire, respectively, a high-definition low-frame first video and a low-definition high-frame second video. The higher-resolution target entity object is acquired from the high-definition low-frame first video, and the position information of the target entity object in the frame to be filled is acquired from the low-definition high-frame second video, so that the frame to be filled can be synthesized and used to perform frame supplementing processing on the first video, finally yielding the high-definition high-frame video. This avoids using a single high-pixel, high-frame-rate camera module to acquire the high-definition high-frame video, which reduces the workload and the number of output images of a single camera module, reduces the heat generated by the camera modules, reduces the influence of that heat on the performance of the whole device, and improves the image quality of the finally acquired video.
Optionally, the processor 610 is further configured to determine a first target video frame in the first video, where the first target video frame is the video frame, among all video frames of the first video, with the smallest time interval to the frame to be filled;
determining a target entity object in the first target video frame;
determining a second target video frame in the second video, wherein the time of the second target video frame is the same as that of the frame to be filled;
and determining the position information corresponding to the entity object in the second target video frame as the relative position information.
Optionally, the processor 610 is further configured to delete the entity object in the second target video frame;
and inserting the target entity object into the position corresponding to the relative position information in the second target video frame, and synthesizing the frame to be filled.
Optionally, the processor 610 is further configured to determine position information corresponding to an entity object in the second target video frame;
determining the position change range of the entity object in the first target video frame according to the position information and a preset position change value;
and determining the target entity object in the area corresponding to the position change range in the first target video frame.
Optionally, the processor 610 is further configured to determine a first target video frame and a third target video frame in the first video, where the first target video frame and the third target video frame are the two video frames of the first video with the smallest time interval to the frame to be filled, and determine position information of a target entity object included in the first target video frame and the third target video frame;
and determining the relative position information according to the position information of the target entity object contained in the first target video frame and the third target video frame.
Optionally, the processor 610 is further configured to calculate average position information of target entity objects included in the first target video frame and the third target video frame;
determining a second target video frame in the second video, which corresponds to the frame to be filled and has the same time, and determining position information of an entity object in the second target video frame;
and calibrating the average position information according to the position information of the entity object in the second target video frame to obtain the relative position information.
Optionally, the processor 610 is further configured to detect heat generation amounts of the first camera module and the second camera module;
determining a difference value of heat generation amounts of the first camera module and the second camera module;
and under the condition that the difference value is greater than or equal to a preset heat generation value, controlling the first camera module to shoot to obtain the second video, and controlling the second camera module to shoot to obtain the first video.
Optionally, the processor 610 is further configured to obtain a target video captured by the target camera module and a plurality of other videos captured by the other camera modules, where a frame rate of the target video is smaller than frame rates of the other videos, and a resolution of the target video is greater than resolutions of the other videos;
determining a target entity object in the target video, and determining the relative position information of the target entity object in a video frame according to the other videos;
and synthesizing a frame to be filled according to the target entity object and the relative position information, and performing frame supplementing processing on the target video by using the frame to be filled to obtain the high-definition high-frame video.
In the present application, two camera modules are used to acquire, respectively, a high-definition low-frame first video and a low-definition high-frame second video. The higher-resolution target entity object is acquired from the high-definition low-frame first video, and the position information of the target entity object in the frame to be filled is acquired from the low-definition high-frame second video, so that the frame to be filled can be synthesized and used to perform frame supplementing processing on the first video, finally yielding the high-definition high-frame video. This avoids using a single high-pixel, high-frame-rate camera module to acquire the high-definition high-frame video, which reduces the workload and the number of output images of a single camera module, reduces the heat generated by the camera modules, reduces the influence of that heat on the performance of the whole device, and improves the image quality of the finally acquired video.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the above-mentioned embodiment of the method for acquiring a high-definition high-frame video, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction, so as to implement each process of the above embodiment of the method for acquiring a high-definition high-frame video, and achieve the same technical effect, and in order to avoid repetition, the description is omitted here.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatuses of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may also include performing the functions in a substantially simultaneous manner or in the reverse order of that described; for example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (18)

1. A method for acquiring a high-definition high-frame video, applied to an electronic device, wherein the electronic device is provided with a first camera module and a second camera module, the method comprising the following steps:
acquiring a first video shot by the first camera module and a second video shot by the second camera module, wherein the frame rate of the first video is less than that of the second video, and the resolution of the first video is greater than that of the second video;
determining a target entity object in the first video, and determining the relative position information of the target entity object in a video frame according to the second video;
and synthesizing a frame to be filled according to the target entity object and the relative position information, and performing frame supplementing processing on the first video by using the frame to be filled to obtain a high-definition high-frame video.
2. The method according to claim 1, wherein the step of determining the target entity object in the first video specifically comprises:
determining a first target video frame in the first video, wherein the first target video frame is the video frame, among all video frames of the first video, with the smallest time interval to the frame to be filled;
determining a target entity object in the first target video frame;
the step of determining the relative position information of the target entity object in the video frame specifically includes:
determining a second target video frame in the second video, wherein the time of the second target video frame is the same as that of the frame to be filled;
and determining the position information corresponding to the entity object in the second target video frame as the relative position information.
3. The method according to claim 2, wherein the step of synthesizing the frame to be padded according to the target entity object and the relative position information specifically includes:
deleting the entity object in the second target video frame;
and inserting the target entity object into the position corresponding to the relative position information in the second target video frame, and synthesizing the frame to be filled.
4. The method according to claim 2, wherein the step of determining the target entity object in the first target video frame specifically comprises:
determining position information corresponding to the entity object in the second target video frame;
determining the position change range of the entity object in the first target video frame according to the position information and a preset position change value;
and determining the target entity object in the area corresponding to the position change range in the first target video frame.
5. The method according to claim 1, wherein the step of determining the relative position information of the target entity object in the video frame specifically comprises:
determining a first target video frame and a third target video frame in the first video, wherein the first target video frame and the third target video frame are the two video frames of the first video with the smallest time interval to the frame to be filled, and determining position information of a target entity object contained in the first target video frame and the third target video frame;
and determining the relative position information according to the position information of the target entity object contained in the first target video frame and the third target video frame.
6. The method according to claim 5, wherein the step of determining the relative position information according to the position information of the target entity object contained in the first target video frame and the third target video frame specifically comprises:
calculating average position information of the target entity object contained in the first target video frame and the third target video frame;
determining a second target video frame in the second video, which corresponds to the frame to be filled and has the same time, and determining position information of an entity object in the second target video frame;
and calibrating the average position information according to the position information of the entity object in the second target video frame to obtain the relative position information.
7. The method of claim 1, further comprising:
detecting heat generation amounts of the first camera module and the second camera module;
determining a difference value of heat generation amounts of the first camera module and the second camera module;
and under the condition that the difference value is greater than or equal to a preset heat generation value, controlling the first camera module to shoot to obtain the second video, and controlling the second camera module to shoot to obtain the first video.
8. A method for acquiring a high-definition high-frame video, applied to an electronic device, wherein the electronic device is provided with a target camera module and a plurality of other camera modules, the method comprising the following steps:
acquiring a target video shot by the target camera module and a plurality of other videos shot by the other camera modules, wherein the frame rate of the target video is less than that of the other videos, and the resolution of the target video is greater than that of the other videos;
determining a target entity object in the target video, and determining the relative position information of the target entity object in a video frame according to the other videos;
and synthesizing a frame to be filled according to the target entity object and the relative position information, and performing frame supplementing processing on the target video by using the frame to be filled to obtain the high-definition high-frame video.
9. An apparatus for acquiring a high-definition high-frame video, applied to an electronic device, wherein the electronic device is provided with a first camera module and a second camera module, and the apparatus is characterized by comprising:
a first obtaining module, configured to obtain a first video captured by the first camera module and a second video captured by the second camera module, where a frame rate of the first video is smaller than a frame rate of the second video, and a resolution of the first video is greater than a resolution of the second video;
the first determining module is used for determining a target entity object in the first video and determining the relative position information of the target entity object in a video frame according to the second video;
and the first synthesis module is used for synthesizing a frame to be filled according to the target entity object and the relative position information, and performing frame supplementing processing on the first video by using the frame to be filled to obtain a high-definition high-frame video.
10. The apparatus according to claim 9, wherein the first determining module specifically includes:
a first determining submodule, configured to determine a first target video frame in the first video, where the first target video frame is the video frame, among all video frames of the first video, with the smallest time interval to the frame to be filled;
a second determining submodule, configured to determine a target entity object in the first target video frame;
the first determining module further includes:
a third determining submodule, configured to determine a second target video frame in the second video, where the time of the second target video frame is the same as that of the frame to be filled;
and the fourth determining submodule is used for determining the position information corresponding to the entity object in the second target video frame as the relative position information.
11. The apparatus according to claim 10, wherein the first synthesis module specifically comprises:
a deleting submodule, configured to delete an entity object in the second target video frame;
and the synthesis submodule is used for inserting the target entity object into the position corresponding to the relative position information in the second target video frame and synthesizing the frame to be filled.
12. The apparatus according to claim 10, wherein the second determining submodule specifically includes:
the first determining unit is used for determining the position information corresponding to the entity object in the second target video frame;
a second determining unit, configured to determine a position change range of the entity object in the first target video frame according to the position information and a preset position change value;
a third determining unit, configured to determine the target entity object in an area corresponding to the position change range in the first target video frame.
13. The apparatus according to claim 9, wherein the first determining module specifically includes:
a fifth determining submodule, configured to determine a first target video frame and a third target video frame in the first video, where the first target video frame and the third target video frame are two video frames in the first video with a minimum time interval to the frame to be filled, and determine position information of a target entity object included in the first target video frame and the third target video frame;
and a sixth determining submodule, configured to determine the relative position information according to position information of the target entity object included in the first target video frame and the third target video frame.
14. The apparatus according to claim 13, wherein the sixth determining submodule specifically includes:
a calculating unit, configured to calculate average position information of a target entity object included in the first target video frame and the third target video frame;
a fourth determining unit, configured to determine a second target video frame in the second video, where the time of the second target video frame is the same as that of the frame to be filled, and determine location information of an entity object in the second target video frame;
and the calibration unit is used for calibrating the average position information according to the position information of the entity object in the second target video frame to obtain the relative position information.
15. The apparatus of claim 9, further comprising:
a detection module configured to detect heat generation amounts of the first camera module and the second camera module;
a second determination module configured to determine a difference in heat generation amounts of the first camera module and the second camera module;
and a control module, configured to control, in a case that the difference value is greater than or equal to a preset heat generation value, the first camera module to shoot to obtain the second video and the second camera module to shoot to obtain the first video.
16. An apparatus for acquiring a high-definition high-frame video, applied to an electronic device, wherein the electronic device is provided with a target camera module and a plurality of other camera modules, and the apparatus is characterized by comprising:
a second obtaining module, configured to obtain a target video captured by the target camera module and a plurality of other videos captured by the other camera modules, where a frame rate of the target video is smaller than frame rates of the other videos, and a resolution of the target video is greater than resolutions of the other videos;
a third determining module, configured to determine a target entity object in the target video, and determine, according to the other videos, relative position information of the target entity object in a video frame;
and the second synthesis module is used for synthesizing a frame to be filled according to the target entity object and the relative position information, and performing frame supplementing processing on the target video by using the frame to be filled to obtain the high-definition high-frame video.
17. An electronic device comprising a processor, a memory and a program or instructions stored on the memory and executable on the processor, the program or instructions when executed by the processor implementing the steps of the method for acquiring high definition high frame video according to any one of claims 1 to 8.
18. A readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method for acquiring high definition high frame video according to any one of claims 1 to 8.
CN202110241209.7A 2021-03-04 2021-03-04 Method and device for acquiring high-definition high-frame video and electronic equipment Active CN113014817B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110241209.7A CN113014817B (en) 2021-03-04 2021-03-04 Method and device for acquiring high-definition high-frame video and electronic equipment

Publications (2)

Publication Number Publication Date
CN113014817A true CN113014817A (en) 2021-06-22
CN113014817B CN113014817B (en) 2022-11-29

Family

ID=76405555

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110241209.7A Active CN113014817B (en) 2021-03-04 2021-03-04 Method and device for acquiring high-definition high-frame video and electronic equipment

Country Status (1)

Country Link
CN (1) CN113014817B (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100027664A1 (en) * 2008-08-04 2010-02-04 Kabushiki Kaisha Toshiba Image Processing Apparatus and Image Processing Method
US20150104116A1 (en) * 2012-03-05 2015-04-16 Thomason Licensing Method and apparatus for performing super-resolution
US20160196473A1 (en) * 2013-10-16 2016-07-07 Huawei Technologies Co., Ltd. Video Extraction Method and Device
WO2018020638A1 (en) * 2016-07-28 2018-02-01 富士機械製造株式会社 Imaging device, imaging system, and image processing method
CN111784570A (en) * 2019-04-04 2020-10-16 Tcl集团股份有限公司 Video image super-resolution reconstruction method and device
CN110198412A (en) * 2019-05-31 2019-09-03 维沃移动通信有限公司 A kind of video recording method and electronic equipment
WO2020253103A1 (en) * 2019-06-17 2020-12-24 睿魔智能科技(深圳)有限公司 Video image processing method, device, apparatus, and storage medium
CN111698553A (en) * 2020-05-29 2020-09-22 维沃移动通信有限公司 Video processing method and device, electronic equipment and readable storage medium

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023273049A1 (en) * 2021-06-29 2023-01-05 杭州海康威视系统技术有限公司 Method and apparatus for analyzing positional relationship between target objects, and storage medium and electronic device
CN113852757A (en) * 2021-09-03 2021-12-28 维沃移动通信(杭州)有限公司 Video processing method, device, equipment and storage medium
CN113852757B (en) * 2021-09-03 2023-05-26 维沃移动通信(杭州)有限公司 Video processing method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN113014817B (en) 2022-11-29

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant