CN112637573A - Multi-lens switching display method and system, intelligent terminal and storage medium - Google Patents


Info

Publication number
CN112637573A
Authority
CN
China
Prior art keywords
frame
lens
rgb
channel color
key frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011541472.XA
Other languages
Chinese (zh)
Inventor
王尊正
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sz Zunzheng Digital Video Co ltd
Original Assignee
Sz Zunzheng Digital Video Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sz Zunzheng Digital Video Co ltd filed Critical Sz Zunzheng Digital Video Co ltd
Priority to CN202011541472.XA
Publication of CN112637573A
Legal status: Pending

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/60Noise processing, e.g. detecting, correcting, reducing or removing noise
    • H04N25/61Noise processing, e.g. detecting, correcting, reducing or removing noise the noise originating only from the lens unit, e.g. flare, shading, vignetting or "cos4"
    • H04N25/611Correction of chromatic aberration
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/10Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths
    • H04N23/13Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths with multiple sensors
    • H04N23/16Optical arrangements associated therewith, e.g. for beam-splitting or for colour correction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/268Signal distribution or switching
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/64Circuits for processing colour signals
    • H04N9/646Circuits for processing colour signals for image enhancement, e.g. vertical detail restoration, cross-colour elimination, contour correction, chrominance trapping filters

Abstract

The application relates to a multi-lens switching display method and system, an intelligent terminal and a storage medium in the field of photography technology, comprising the following steps: acquiring the lens information shot by each camera; extracting frame images from the lens information and acquiring a reference feature; establishing an RGB color space model; generating preset three-channel color information; determining the RGB three-channel color information of each frame image; and comparing the RGB three-channel color information of each frame image with the preset RGB three-channel color information, and calibrating the RGB three-channel color information of the frame image if the results are inconsistent. In this application, the color information of each frame image is determined from the RGB three-channel color information of the reference feature, and is then compared with the preset three-channel color information and calibrated, so that the colors of the videos shot by all the lenses tend to be consistent, the obvious change of color in the same scene during lens switching is reduced, and the viewing experience is improved.

Description

Multi-lens switching display method and system, intelligent terminal and storage medium
Technical Field
The present application relates to the field of photography technologies, and in particular, to a multi-lens switching display method and system, an intelligent terminal, and a storage medium.
Background
It is well known that the videos people watch, such as movies and TV dramas, are spliced together from a plurality of video segments. In terms of shooting technique there are long shots and short shots, which differ in how long a single shot lasts, but essentially no film is shot in one long take from beginning to end; that is, whether long shots or short shots are used, a plurality of shots are spliced together to form a complete film or television work.
When the same scene is photographed by a plurality of devices, there may be differences in color between the images captured by different devices. For example, when a scene is shot by 4 cameras, the image shot by camera No. 1 may appear greenish because of the characteristics of that camera, the image from camera No. 2 may appear yellowish, and so on. If the lens is switched directly at this time, the change of color in the same scene is clearly perceptible, resulting in a poor viewing experience.
Disclosure of Invention
In order to reduce obvious color changes during lens switching, the application provides a multi-lens switching display method and system, an intelligent terminal and a storage medium.
In a first aspect, the present application provides a multi-shot switching display method, including the following steps:
acquiring lens information shot by each camera;
extracting a frame image in the lens information, and acquiring a reference characteristic according to the frame image;
establishing an RGB color space model;
generating preset RGB three-channel color information according to the RGB color space model;
determining RGB three-channel color information of the lens background according to the reference characteristics;
and calibrating the RGB three-channel color information of each frame of image according to the RGB three-channel color information of each lens background and the preset RGB three-channel color information.
By adopting this technical scheme, similar video frames in the scene shot by each lens change little and share similar features. The reference features are obtained from the frame images, and the RGB three-channel color information of the reference features is determined from the established RGB color space model, so that the color information of each frame image is determined. The color information of each frame image is then compared with the preset three-channel color information and calibrated, so that the colors of the videos shot by the lenses tend to be similar, the obvious change of color in the same scene during lens switching is reduced, and the viewing experience is improved.
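As a rough illustration of this comparison step, the sketch below computes the mean RGB of a reference-feature region in a frame and flags the frame for calibration when any channel deviates from the preset value by more than a tolerance; the function, mask, and tolerance are illustrative assumptions rather than the patent's exact procedure.

```python
# Illustrative sketch (not the patent's exact procedure): flag frames whose
# reference-feature colour deviates from the preset RGB three-channel values.
import numpy as np

def needs_calibration(frame_rgb: np.ndarray,
                      reference_mask: np.ndarray,
                      preset_rgb: tuple,
                      tolerance: float = 10.0) -> bool:
    """frame_rgb: H x W x 3 image; reference_mask: H x W boolean mask of the
    reference feature (e.g. the reference object's background region)."""
    region = frame_rgb[reference_mask].astype(np.float32)  # N x 3 pixels of the reference feature
    mean_rgb = region.mean(axis=0)                         # per-channel mean colour
    deviation = np.abs(mean_rgb - np.asarray(preset_rgb, dtype=np.float32))
    # Inconsistent with the preset colour -> this frame should be calibrated.
    return bool((deviation > tolerance).any())
```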
Optionally, after the step of calibrating the RGB three-channel color information of each frame of image according to the RGB three-channel color information of each lens background and the preset RGB three-channel color information, the method further includes the following steps:
acquiring front and rear frame images at a lens switching position;
acquiring relevant reference characteristics according to the front frame image and the rear frame image;
and performing definition detection on the associated reference features, comparing the definition of the associated reference features before and after, and marking the images of the two frames before and after if the results are inconsistent.
By adopting this technical scheme, if the definition of the same type of reference feature differs between the images before and after a lens switch, the viewing experience may still be poor even when the colors of the lenses are consistent. The definition of the frame images before and after the lens switch is therefore detected, and the frame image with lower definition is marked, prompting the lens to be re-shot or further processed.
Optionally, after the step of calibrating the RGB three-channel color information of each frame of image according to the RGB three-channel color information of each lens background and the preset RGB three-channel color information, the method further includes the following steps:
acquiring scene information synthesized by each lens group;
extracting key frame images of the lens according to the scene information;
acquiring reference characteristics according to the key frame image;
acquiring a brightness difference value of reference features in two adjacent key frame images;
and comparing the brightness difference with a preset threshold, if the brightness difference is greater than the preset threshold, indicating that lens conversion occurs, and marking the position of the lens conversion.
By adopting this technical scheme, the scenes combined from the lenses, particularly the positions where lenses are switched, need to be carefully checked in post-production. The brightness difference of the reference features in two adjacent key frame images is compared with the preset threshold, and the positions where lenses are switched are located and marked, which makes later inspection of the lenses convenient.
Optionally, the reference feature includes a background feature, and the background feature is a background image of a reference object in the frame image.
By adopting this technical scheme, the reference objects of each lens or scene may not be identical, but the color and definition of the background in some images are uniform and easy to analyze, so selecting the background image of the reference object helps analyze the frame image accurately.
Optionally, the step of extracting the key frame image includes:
selecting a first frame image of a lens as a first key frame;
sequentially calculating the frame number between the current lens residual frame image and the first key frame, and if the obtained frame number is greater than a preset threshold value, setting the frame as a second key frame;
and selecting the second key frame as a reference frame, sequentially calculating the frame number between the residual frame image of the current lens and the reference frame, if the obtained frame number is greater than a preset threshold value, setting the frame as a next key frame, and repeating the selection process of the reference frame until the current lens is finished.
By adopting this technical scheme, a variable number of key frames is selected according to how much the lens content changes, and the calculation is relatively simple.
Optionally, after the step of extracting the key frame image, the method further includes the following steps:
collecting the motion amount of each key frame;
and if the motion amount is larger than a preset threshold value, rejecting the key frame.
By adopting this technical scheme, when a fast-moving object exists in the lens, too many key frames are easily selected, causing data redundancy; repeated and redundant key frames are removed by comparing the motion amount of the key frames, thereby reducing data redundancy.
In a second aspect, the present application provides a multi-lens switching display system, including: the first acquisition module is used for acquiring lens information shot by each camera;
the first extraction module is used for extracting a frame image in the lens information and extracting reference features in the frame image;
the first analysis module is used for establishing an RGB color space model and generating a preset RGB three-channel color value and an RGB three-channel color value of the reference feature;
the first comparison module is used for comparing the RGB three-channel color value of the reference feature with a preset RGB three-channel color value, and if the results are inconsistent, outputting a corresponding first control instruction;
the calibration module is used for receiving the first control instruction and adjusting the three-channel color value of the lens background to approach to a preset three-channel color value;
the second acquisition module is used for acquiring two frames of images before and after the lens switching position;
the second extraction module is used for extracting the associated reference features in the front frame image and the rear frame image;
a detection module for detecting the sharpness of the associated reference features;
the second comparison module is used for comparing the definition of the front and rear correlation reference characteristics, and outputting a corresponding second control instruction if the results are inconsistent;
the first marking module is used for receiving a second control instruction and marking the front frame image and the rear frame image;
the third acquisition module is used for acquiring scenes formed by combining all the lenses;
the third extraction module is used for extracting the key frame images of all the shots and the reference features in the key frame images;
the second analysis module is used for analyzing the brightness difference value of the reference features in the two adjacent key frame images;
the third comparison module is used for comparing the brightness difference value with a preset threshold value, and if the brightness difference value is larger than the preset threshold value, outputting a corresponding third control instruction; and
and the second marking module is used for receiving the third control instruction and marking the position of the lens conversion.
By adopting this technical scheme, the lens information shot by each camera is obtained, the frame images in the lens information are extracted, and the reference object background image in each frame image is extracted. An RGB color space model is established by using a computer and Zunzheng DIT LUT software, the RGB three-channel color value of the reference feature is compared with the preset RGB three-channel color value, and if the results are inconsistent, the monitor adjusts the background color of the current lens to approach the preset color value.
Two frames of images before and after the lens switching position are acquired, and the associated reference object backgrounds in the two frames are extracted. Pixel units of the same size are intercepted from the associated reference object backgrounds in the frames before and after the switch, and the number of pixels in those pixel units is collected and compared; if the pixel counts of the selected pixel units are inconsistent, the definition of the two frames is considered inconsistent, and both frames are marked.
A scene formed by combining all the lenses is obtained, and the key frame images of each lens and the reference object backgrounds in the key frame images are extracted. A computer calculates the brightness difference of the reference features in two adjacent key frame images, the processor compares the brightness difference with a preset threshold, and if the brightness difference is larger than the preset threshold the monitor marks the lens transition position.
Optionally, the third extraction module includes:
the sub-extraction module is used for selecting a first frame image of the lens as a first key frame;
the calculation module is used for calculating the frame number between the current lens residual frame image and the first key frame in sequence;
the first analysis submodule is used for analyzing the obtained frame number, if the obtained frame number is larger than a preset threshold value, setting the frame as a second key frame, selecting the second key frame as a reference frame, and repeating the calculation and comparison processes until the current shot is finished;
the acquisition module is used for acquiring the motion amount of each key frame; and
and the second analysis submodule is used for analyzing the motion amount, and if the motion amount is larger than a preset threshold value, the key frame is eliminated.
By adopting this technical scheme, when a fast-moving object exists in the lens, too many key frames are easily selected, causing data redundancy; repeated and redundant key frames are removed by comparing the motion amount of the key frames, thereby reducing data redundancy.
In a third aspect, the present application provides an intelligent terminal system, including a memory and a processor, wherein the memory stores a computer program that can be loaded by the processor to execute any one of the above multi-lens switching display methods.
In a fourth aspect, the present application provides a computer storage medium storing a computer program that can be loaded by a processor to execute any one of the above multi-lens switching display methods.
In summary, the technical scheme provided by the application has the following beneficial effects: through the established RGB color space model, the background color value of each lens is adjusted to a preset RGB three-channel color value, so that the colors of the videos shot by the lenses tend to be similar; the definition of the frame images before and after each lens switching position is detected, and frame images with lower definition are marked; and the positions where lenses are switched are marked, which facilitates subsequent checking and processing.
Drawings
FIG. 1 is a component structure of a video file;
FIG. 2 is a block diagram of a flow chart of a multi-shot switching display method according to the present application;
fig. 3 is a block flow diagram of a method for extracting key frames according to the present application.
Detailed Description
The present application is described in further detail below with reference to figures 1-3.
Referring to fig. 1, a video file is generally composed of a three-layer structure of scenes, shots, and frames. The bottom layer is a frame, the middle layer is a lens, and the top layer is a scene. When the video is processed, the video is firstly divided into shots, then frames are extracted from each shot, and each frame of image is processed respectively.
Referring to fig. 2, a multi-lens switching display method disclosed in the embodiment of the present application specifically includes the following steps:
s101, acquiring lens information shot by each camera, extracting frame images in the lens information, and acquiring reference characteristics according to the frame images.
The reference features comprise background features, and the background features are background images of the reference objects in the frame images. Specifically, a frame image in the shot is extracted by using a computer and a monitor, and a background image of a reference object in the frame image is selected, which may be a background image of a representative object.
S102, establishing an RGB color space model, determining the RGB three-channel color values of the lens background according to the color of the reference object background image, and presetting a plurality of groups of styled three-channel color values.
Specifically, an RGB color space model can be established by using a computer and Zunzheng DIT LUT software, and a plurality of groups of styled three-channel color values are preset in a Zunzheng monitor to meet the requirements of different scenes.
S103, calibrating the RGB three-channel color values of each lens background according to the RGB three-channel color values of the lens backgrounds and the selected preset RGB three-channel color values.
For example, if the preset three-channel color value is (255, 255, 255) and the current three-channel color value of the shot is (150, 255, 150), the background color of the shot can be adjusted to approximate the preset color value, such as (235, 255, 235), by using the monitor and the DIT LUT software, so as to reduce the color difference of the original shot background.
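As an illustration of this adjustment, the following sketch nudges a frame toward the preset three-channel value by the per-channel gap between the shot's background color and the preset color; the function name, the strength parameter, and the use of NumPy are assumptions for illustration, not part of the patent.

```python
# Illustrative sketch: shift a frame's colours toward the preset three-channel
# value, e.g. a (150, 255, 150) background moves to roughly (235, 255, 235).
import numpy as np

def calibrate_frame(frame_rgb: np.ndarray,
                    shot_bg_rgb: tuple,
                    preset_rgb: tuple = (255, 255, 255),
                    strength: float = 0.8) -> np.ndarray:
    """Apply a per-channel offset that closes `strength` of the gap between the
    shot's background colour and the preset colour (strength is an assumed knob)."""
    offset = strength * (np.asarray(preset_rgb, np.float32) - np.asarray(shot_bg_rgb, np.float32))
    corrected = frame_rgb.astype(np.float32) + offset   # same offset applied to every pixel
    return np.clip(corrected, 0, 255).astype(np.uint8)
```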
S104, acquiring front and rear frame images at the lens switching position, acquiring the associated reference characteristics according to the front and rear frame images, performing definition detection on the associated reference characteristics, comparing the definitions of the front and rear associated reference characteristics, and marking the front and rear frame images if the results are inconsistent.
Specifically, in real-time shooting, when the image shot by the next lens enters the monitor, pixel units of the same size are intercepted from the two frames before and after the switch, and the number of pixels in the pixel units is collected and compared; if the pixel counts of the selected pixel units are inconsistent, the definition of the two frames is considered inconsistent, and both frames are marked.
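The following sketch illustrates a sharpness consistency check on the two frames around a cut. Since the exact pixel-counting criterion is not spelled out in the patent, this sketch substitutes the variance of the Laplacian over the cropped region as a common sharpness proxy; the region of interest, tolerance, and function names are assumptions for illustration.

```python
# Illustrative sketch: compare the sharpness of the same-size region around the
# shared reference background in the frames before and after a cut, and report
# whether the two frames should be marked.
import cv2
import numpy as np

def sharpness_score(gray_region: np.ndarray) -> float:
    # Higher variance of the Laplacian -> sharper region (assumed proxy metric).
    return float(cv2.Laplacian(gray_region, cv2.CV_64F).var())

def sharpness_mismatch(prev_frame: np.ndarray, next_frame: np.ndarray,
                       roi: tuple, tol: float = 0.2) -> bool:
    """roi = (x, y, w, h): same-size region in both frames; tol is an assumed
    relative tolerance. Returns True when the two frames should be marked."""
    x, y, w, h = roi
    prev_gray = cv2.cvtColor(prev_frame[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    next_gray = cv2.cvtColor(next_frame[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    s_prev, s_next = sharpness_score(prev_gray), sharpness_score(next_gray)
    return abs(s_prev - s_next) > tol * max(s_prev, s_next, 1e-6)
```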
S105, acquiring the scene information synthesized by all the lens groups, and extracting the key frame images of each lens according to the scene information.
The key frames represent the main content of the shots and are also the core components in video retrieval. The key frames of a shot may contain only one frame or several frames, which can reflect the main changes and motions in the shot.
Referring to fig. 3, the specific steps of extracting the key frame are as follows:
and S' 111, selecting a first frame image of the shot as a first key frame, and sequentially calculating the frame number between the current residual frame image of the shot and the first key frame.
S' 112, comparing the frame number with a preset threshold, and if the obtained frame number is greater than the preset threshold, setting the frame as a second key frame; then, selecting a second key frame as a reference frame, and repeating the detection and calculation processes until the current shot is finished.
For example, if a shot lasts 1 minute at 20 frames per second and the preset threshold is 20 frames, then the 1st frame is taken as the first key frame, the 21st frame as the second key frame, the 41st frame as the third, the 61st frame as the fourth, the 81st frame as the fifth, and so on.
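The following is a minimal sketch of this key-frame selection rule, assuming 0-based frame indices and a "greater than or equal" comparison so that, with a threshold of 20, frames 1, 21, 41, ... (1-based) are chosen as in the example above; the function name and signature are illustrative, not from the patent.

```python
# Illustrative sketch of the key-frame selection rule described in S'111-S'112.
def select_key_frames(num_frames: int, threshold: int = 20) -> list:
    """Return 0-based key-frame indices: the first frame of the shot, then a new
    key frame each time the gap to the previous key frame reaches the threshold."""
    keys = [0]                              # the first frame is the first key frame
    for i in range(1, num_frames):
        if i - keys[-1] >= threshold:       # enough frames since the last key frame
            keys.append(i)
    return keys

# For a 1-minute shot at 20 frames per second:
# select_key_frames(1200)[:5] -> [0, 20, 40, 60, 80], i.e. frames 1, 21, 41, 61, 81.
```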
When a fast-moving object exists in the lens, too many key frames are easily selected, causing data redundancy. Therefore, after the key frames have been collected, the optical flow is analyzed with the Horn-Schunck algorithm and the amount of motion in the lens is calculated. The main process is to first calculate the optical flow for each pixel, then sum the moduli of the optical flow components over all pixels in each frame image and take this value as the motion amount of that frame.
S'113, calculating the motion amount of each key frame, comparing the motion amount with a preset threshold, and if the motion amount is greater than the preset threshold, rejecting the key frame.
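Below is a hedged sketch of this motion-based pruning step. The patent names the Horn-Schunck algorithm; as an assumption, this sketch uses OpenCV's dense Farneback optical flow as a readily available stand-in and sums the per-pixel flow magnitudes to obtain the motion amount of each key frame. Function names and the threshold parameter are illustrative.

```python
# Illustrative sketch of motion-based key-frame pruning (Farneback flow is a
# stand-in for the Horn-Schunck algorithm named in the description).
import cv2
import numpy as np

def motion_amount(prev_gray: np.ndarray, curr_gray: np.ndarray) -> float:
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    return float(np.linalg.norm(flow, axis=2).sum())    # sum of per-pixel flow moduli

def prune_key_frames(frames: list, key_idx: list, motion_threshold: float) -> list:
    """Drop key frames whose motion amount (relative to the preceding frame)
    exceeds the threshold; `frames` holds BGR images, `key_idx` 0-based indices."""
    kept = []
    for k in key_idx:
        if k == 0:
            kept.append(k)                              # always keep the first key frame
            continue
        prev_g = cv2.cvtColor(frames[k - 1], cv2.COLOR_BGR2GRAY)
        curr_g = cv2.cvtColor(frames[k], cv2.COLOR_BGR2GRAY)
        if motion_amount(prev_g, curr_g) <= motion_threshold:
            kept.append(k)
    return kept
```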
S106, acquiring reference features according to the key frame images, acquiring the brightness difference of the reference features in two adjacent key frame images, comparing the brightness difference with a preset threshold, and if the brightness difference is greater than the preset threshold, indicating that a lens transition has occurred and marking the position of the lens transition.
Wherein, the reference feature can select a background image with a representative reference object.
Specifically, a correction monitor is used for obtaining a brightness difference value of a reference object background in two adjacent key frame images, the brightness difference value is compared with a preset threshold value, and if the brightness difference value is larger than the preset threshold value, the position is marked.
Specifically, the gray-scale or brightness difference fd of the corresponding pixel (i, j) between two adjacent frames is calculated first, where fd(i, j) = | f_n(i, j) - f_{n+1}(i, j) |, and f_n(i, j) and f_{n+1}(i, j) represent the gray-scale or brightness values of pixel (i, j) in the n-th frame and the (n+1)-th frame, respectively. The total difference Fd between the two adjacent frames is then:
Fd = Σ_{i=1}^{M} Σ_{j=1}^{N} fd(i, j)
where M is the length of the frame image and N is the width of the frame image.
The frame difference Fd between the two frames is then compared with a set threshold, and if Fd is larger than the threshold, an abrupt shot transition has occurred.
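A minimal sketch of this frame-difference test, assuming grayscale (or luminance) frames of identical size; the helper names and the threshold are illustrative, not from the patent.

```python
# Illustrative sketch of the frame-difference test: Fd is the sum over all
# pixels of |f_n(i, j) - f_{n+1}(i, j)|; a cut is flagged when Fd exceeds
# the preset threshold.
import numpy as np

def frame_difference(f_n: np.ndarray, f_n1: np.ndarray) -> float:
    fd = np.abs(f_n.astype(np.int32) - f_n1.astype(np.int32))   # per-pixel difference
    return float(fd.sum())                                      # total difference Fd

def is_abrupt_transition(f_n: np.ndarray, f_n1: np.ndarray, threshold: float) -> bool:
    return frame_difference(f_n, f_n1) > threshold
```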
The embodiment of the application further discloses a multi-lens switching display system, which specifically comprises a first acquisition module, a first extraction module, a first analysis module, a first comparison module, a calibration module, a second acquisition module, a second extraction module, a detection module, a second comparison module, a first marking module, a third acquisition module, a third extraction module, a third comparison module and a second marking module.
The first acquisition module is used for acquiring lens information shot by each camera;
the first extraction module is used for extracting a frame image in the lens information and extracting reference features in the frame image;
the first analysis module is used for establishing an RGB color space model and generating a preset RGB three-channel color value and an RGB three-channel color value of the reference feature;
the first comparison module is used for comparing the RGB three-channel color value of the reference feature with a preset RGB three-channel color value, and if the results are inconsistent, outputting a corresponding first control instruction;
the calibration module is used for receiving the first control instruction and adjusting the three-channel color value of the lens to approach to the preset three-channel color value;
the second acquisition module is used for acquiring two frames of images before and after the lens switching position;
the second extraction module is used for extracting the characteristics of the associated reference objects in the front frame image and the rear frame image;
a detection module for detecting the sharpness of the associated reference features;
the second comparison module is used for comparing the definition of the characteristics of the front and rear associated reference objects, and outputting a corresponding second control instruction if the results are inconsistent;
the first marking module is used for receiving a second control instruction and marking the front frame image and the rear frame image;
and the third acquisition module is used for acquiring the scene synthesized by all the lens groups.
The third extraction module comprises a sub-extraction module, a calculation module, a first analysis submodule, an acquisition module, and a second analysis submodule.
The sub-extraction module is used for selecting a first frame image of the lens as a first key frame;
the calculation module is used for calculating the frame number between the current lens residual frame image and the first key frame in sequence;
the first analysis submodule is used for analyzing the obtained frame number, if the obtained frame number is larger than a preset threshold value, setting the frame as a second key frame, selecting the second key frame as a reference frame, and repeating the calculation and comparison processes until the current shot is finished;
the acquisition module is used for acquiring the motion amount of each key frame;
and the second analysis submodule is used for analyzing the motion amount, and if the motion amount is larger than a preset threshold value, the key frame is eliminated.
The second analysis module is used for analyzing the brightness difference value of the reference features in the two adjacent key frame images;
the third comparison module is used for comparing the brightness difference value with a preset threshold value, and if the brightness difference value is larger than the preset threshold value, outputting a corresponding third control instruction;
and the second marking module is used for receiving the third control instruction and marking the position of the lens conversion.
Specifically, the lens information shot by each camera is obtained through the processor, then the frame images in the lens information are extracted, and the reference object background image in each frame image is extracted. An RGB color space model is established by using a computer and Zunzheng DIT LUT software, the RGB three-channel color value of the reference feature is compared with the preset RGB three-channel color value, and the monitor adjusts the background color of the current lens to approach the preset color value.
Two frames of images before and after the lens switching position are obtained through the processor, and the associated reference object backgrounds in those two frames are extracted. Pixel units of the same size are intercepted from the associated reference object backgrounds in the frames before and after the switch, the number of pixels in the pixel units is collected and compared, and if the pixel counts of the selected pixel units are inconsistent, the definition of the two frames is judged inconsistent and both frames are marked.
A scene formed by combining all the lenses is obtained through the processor, and the key frame images of each lens and the reference object backgrounds in the key frame images are extracted. A computer calculates the brightness difference of the reference features in two adjacent key frame images, the processor compares the brightness difference with a preset threshold, and if the brightness difference is larger than the preset threshold the monitor marks the lens transition position.
When extracting the key frame, selecting a first frame image of the lens as the first key frame through the processor, sequentially calculating the frame number between the current lens residual frame image and the first key frame, and if the obtained frame number is greater than a preset threshold value, setting the frame as a second key frame; and then, selecting a second key frame as a reference frame, sequentially calculating the number of frames between the residual frame image of the current shot and the reference frame, if the obtained number of frames is greater than a preset threshold value, setting the frame as a next key frame, and repeating the selection process of the reference frame until the current shot is finished. Meanwhile, the motion amount of each key frame is calculated, and redundant key frames are removed.
The embodiment of the application also discloses an intelligent terminal system which comprises a memory and a processor, wherein the memory is stored with a computer program which can be loaded by the processor and can execute the multi-lens switching display method.
The embodiment of the application also discloses a computer readable storage medium, which stores a computer program capable of being loaded by a processor and executing the multi-lens switching display method.
The computer-readable storage medium includes, for example: various media capable of storing program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The processor mentioned in any of the embodiments of the present application may be a CPU, a microprocessor, an ASIC, or one or more integrated circuits for controlling the execution of the program of the above methods. The processing unit and the storage unit may be decoupled, respectively disposed on different physical devices, and connected in a wired or wireless manner to implement their respective functions, so as to support the system chip in implementing the various functions of the foregoing embodiments. Alternatively, the processing unit and the memory may be coupled to the same device.
The above embodiments are preferred embodiments of the present application, and the protection scope of the present application is not limited by the above embodiments, so: all equivalent changes made according to the structure, shape and principle of the present application shall be covered by the protection scope of the present application.

Claims (10)

1. A multi-lens switching display method is characterized by comprising the following steps:
acquiring lens information shot by each camera;
extracting a frame image in the lens information, and acquiring a reference characteristic according to the frame image;
establishing an RGB color space model;
generating preset RGB three-channel color information according to the RGB color space model;
determining RGB three-channel color information of the lens background according to the reference characteristics;
and calibrating the RGB three-channel color information of each frame of image according to the RGB three-channel color information of each lens background and the preset RGB three-channel color information.
2. The multi-shot switching display method according to claim 1, wherein after the step of calibrating the RGB three-channel color information of each frame image according to the RGB three-channel color information of each shot background and the preset RGB three-channel color information, further comprising the steps of:
acquiring front and rear frame images at a lens switching position;
acquiring relevant reference characteristics according to the front frame image and the rear frame image;
and performing definition detection on the associated reference features, comparing the definition of the associated reference features before and after, and marking the images of the two frames before and after if the results are inconsistent.
3. The multi-shot switching display method according to claim 1, wherein after the step of calibrating the RGB three-channel color information of each frame image according to the RGB three-channel color information of each shot background and the preset RGB three-channel color information, further comprising the steps of:
acquiring scene information synthesized by each lens group;
extracting key frame images of the lens according to the scene information;
acquiring reference characteristics according to the key frame image;
acquiring a brightness difference value of reference features in two adjacent key frame images;
and comparing the brightness difference with a preset threshold, if the brightness difference is greater than the preset threshold, indicating that lens conversion occurs, and marking the position of the lens conversion.
4. A multi-shot switching display method as claimed in claim 1, 2 or 3, wherein said reference feature comprises a background feature, and said background feature is a background image of a reference in the frame image.
5. The multi-shot switching display method as claimed in claim 3, wherein the step of extracting the key frame image comprises:
selecting a first frame image of a lens as a first key frame;
sequentially calculating the frame number between the current lens residual frame image and the first key frame, and if the obtained frame number is greater than a preset threshold value, setting the frame as a second key frame;
and selecting the second key frame as a reference frame, sequentially calculating the frame number between the residual frame image of the current lens and the reference frame, if the obtained frame number is greater than a preset threshold value, setting the frame as a next key frame, and repeating the selection process of the reference frame until the current lens is finished.
6. The method as claimed in claim 5, further comprising the following steps after the step of extracting the key frame image:
collecting the motion amount of each key frame;
and if the motion amount is larger than a preset threshold value, rejecting the key frame.
7. A multi-lens switching display system, comprising: the first acquisition module is used for acquiring lens information shot by each camera;
the first extraction module is used for extracting a frame image in the lens information and extracting reference features in the frame image;
the first analysis module is used for establishing an RGB color space model and generating a preset RGB three-channel color value and an RGB three-channel color value of the reference feature;
the first comparison module is used for comparing the RGB three-channel color value of the reference feature with a preset RGB three-channel color value, and if the results are inconsistent, outputting a corresponding first control instruction;
the calibration module is used for receiving the first control instruction and adjusting the three-channel color value of the lens background to approach to a preset three-channel color value;
the second acquisition module is used for acquiring two frames of images before and after the lens switching position;
the second extraction module is used for extracting the associated reference features in the front frame image and the rear frame image;
a detection module for detecting the sharpness of the associated reference features;
the second comparison module is used for comparing the definition of the front and rear correlation reference characteristics, and outputting a corresponding second control instruction if the results are inconsistent;
the first marking module is used for receiving a second control instruction and marking the front frame image and the rear frame image;
the third acquisition module is used for acquiring scenes formed by combining all the lenses;
the third extraction module is used for extracting the key frame images of all the shots and the reference features in the key frame images;
the second analysis module is used for analyzing the brightness difference value of the reference features in the two adjacent key frame images;
the third comparison module is used for comparing the brightness difference value with a preset threshold value, and if the brightness difference value is larger than the preset threshold value, outputting a corresponding third control instruction; and
and the second marking module is used for receiving the third control instruction and marking the position of the lens conversion.
8. The multi-lens switching display system according to claim 7, wherein the third extracting module comprises:
the sub-extraction module is used for selecting a first frame image of the lens as a first key frame;
the calculation module is used for calculating the frame number between the current lens residual frame image and the first key frame in sequence;
the first analysis submodule is used for analyzing the obtained frame number, if the obtained frame number is larger than a preset threshold value, setting the frame as a second key frame, selecting the second key frame as a reference frame, and repeating the calculation and comparison processes until the current shot is finished;
the acquisition module is used for acquiring the motion amount of each key frame; and
and the second analysis submodule is used for analyzing the motion amount, and if the motion amount is larger than a preset threshold value, the key frame is eliminated.
9. An intelligent terminal system, comprising a memory and a processor, characterized in that the memory stores a computer program that can be loaded by the processor to execute the multi-lens switching display method according to any one of claims 1 to 6.
10. A computer-readable storage medium, characterized in that it stores a computer program that can be loaded by a processor to execute the multi-lens switching display method according to any one of claims 1 to 6.
CN202011541472.XA 2020-12-23 2020-12-23 Multi-lens switching display method and system, intelligent terminal and storage medium Pending CN112637573A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011541472.XA CN112637573A (en) 2020-12-23 2020-12-23 Multi-lens switching display method and system, intelligent terminal and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011541472.XA CN112637573A (en) 2020-12-23 2020-12-23 Multi-lens switching display method and system, intelligent terminal and storage medium

Publications (1)

Publication Number Publication Date
CN112637573A true CN112637573A (en) 2021-04-09

Family

ID=75322000

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011541472.XA Pending CN112637573A (en) 2020-12-23 2020-12-23 Multi-lens switching display method and system, intelligent terminal and storage medium

Country Status (1)

Country Link
CN (1) CN112637573A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114268741A (en) * 2022-02-24 2022-04-01 荣耀终端有限公司 Transition dynamic effect generation method, electronic device, and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20060116335A * 2005-05-09 2006-11-15 Samsung Electronics Co., Ltd. Apparatus and method for summarizing moving pictures using events, and computer-readable storage storing a computer program controlling the apparatus
CN106331524A (en) * 2016-08-18 2017-01-11 无锡天脉聚源传媒科技有限公司 Method and device for recognizing shot cut
CN109474809A (en) * 2018-11-07 2019-03-15 深圳六滴科技有限公司 Chromatic aberration calibrating method, device, system, panorama camera and storage medium
CN110740378A (en) * 2019-09-05 2020-01-31 天脉聚源(杭州)传媒科技有限公司 Method, system, device and storage medium for identifying notice in videos



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210409