CN104301596B - Video processing method and apparatus - Google Patents
Video processing method and apparatus
- Publication number: CN104301596B
- Application number: CN201310292305.XA
- Authority: CN (China)
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Abstract
The invention discloses a video processing method and apparatus, which solve the problem that existing video capture cannot obtain the target image from the captured video in real time during shooting. The method of the embodiments of the invention includes: receiving video images acquired from the outside world and determining a target area in the video images; then, for each frame received after the target area has been determined, cropping and correcting the frame according to the parameter information of the target area, and obtaining and outputting the image within the target area of that frame. Because only the image inside the target area of each frame is output, the user experience is improved and the amount of later processing required is reduced.
Description
Technical field
The present invention relates to the field of multimedia video technology, and in particular to a video processing method and apparatus.
Background technology
Current video capture devices typically magnify or shrink the scene to be shot by optical zoom, and adjust the quality of the captured image (such as color, exposure and white balance) to obtain the required video.

In terms of video image content, current capture devices only scale the captured image proportionally. During shooting, however, we sometimes care only about the image inside a specific region of the captured frame, while the image in the other regions (such as the background scene) is not of interest. Because current capture devices lack intelligent scene analysis, segmentation and processing, the captured video contains not only the target image of interest but also background imagery that need not be recorded. As a result, the target image generally cannot fill the field of view to best effect, which degrades the user experience and increases the complexity of post-production.

For example, when recording a PPT (PowerPoint) presentation in a conference hall, what matters is the content of the slides the speaker is showing. If the capture device is not placed directly in front of the slides, the captured video contains, in addition to the slide content of interest, background imagery that need not be recorded, so the captured slide image generally cannot fill the field of view to best effect. Similar problems arise when shooting stage plays and other scenes whose content is confined to a specific region.

In summary, during video shooting it has not been possible to obtain the target image from the captured video in real time.
Summary of the invention
Embodiments of the present invention provide a video processing method and apparatus, to solve the problem that, during existing video shooting, the target image cannot be obtained from the captured video in real time.

An embodiment of the present invention provides a video processing method, including:

receiving video images acquired from the outside world, and determining a target area in the video images;

for each frame received after the target area has been determined, cropping the frame according to the parameter information of the target area, obtaining the image within the target area of that frame, and outputting it.

An embodiment of the present invention provides a video processing apparatus, including:

a target area determination unit, configured to receive video images acquired from the outside world and determine the target area in the video images;

a processing unit, configured to crop each frame received after the target area has been determined, according to the parameter information of the target area, and to obtain and output the image within the target area of that frame.

In the embodiments of the present invention, video images acquired from the outside world are received and a target area in them is determined; each frame received after the target area has been determined is cropped according to the parameter information of the target area, and the image within the target area of that frame is obtained and output. Because only the image inside the target area of each frame is output, the user experience is improved and the amount of later processing is reduced.
Description of the drawings
Fig. 1 is a flow diagram of a video processing method provided by an embodiment of the present invention;
Fig. 2 is a flow diagram of a way of determining the target area of a video image, provided by an embodiment of the present invention;
Fig. 3A–3E are schematic diagrams of video processing provided by an embodiment of the present invention, taking a PPT shooting scene as an example;
Fig. 4 is a flow diagram of another video processing method provided by an embodiment of the present invention;
Fig. 5 is a flow diagram of movement detection provided by an embodiment of the present invention;
Fig. 6 is a timeline diagram of the video processing procedure provided by an embodiment of the present invention;
Fig. 7 is a structural diagram of a video processing apparatus provided by an embodiment of the present invention;
Fig. 8 is a structural diagram of another video processing apparatus provided by an embodiment of the present invention;
Fig. 9 is a data flow and processing diagram of the apparatus shown in Fig. 8.
Detailed description of embodiments
The embodiments of the present invention are described in further detail below with reference to the accompanying drawings.
As shown in Fig. 1, a video processing method provided by an embodiment of the present invention includes:

Step 11: receive video images acquired from the outside world, and determine the target area in the received video images;

Step 12: for each frame received after the target area has been determined, crop the frame according to the parameter information of the target area, obtain the image within the target area of that frame, and output it.

In the embodiments of the present invention, the target area is a sub-region of the video image whose content is of interest to the user, and the determined target area is a quadrilateral bounded by target lines.

In the embodiments of the present invention, the target area in the received video image is determined first; then each subsequently received frame is cropped according to the parameter information of the target area, the image within the target area of each frame is obtained, and that image is output. With this method, only the image inside the target area of each frame is output, which improves the user experience and reduces later processing.
In implementation, in step 11, the target area in the received video image can be determined in either of two ways:

Way A: when a video image is received, the target area in it is determined automatically according to a preset algorithm. As shown in Fig. 2, this specifically includes the following steps:

Step 21: perform edge detection on the video image to obtain the edge information of each region of the image.

Preferably, this step can use the Sobel, Canny, Roberts, Prewitt or Krisch algorithm, among others, to perform edge detection on the video image.

Step 22: perform a line search on the video image according to the obtained edge information.

Preferably, this step can use the Hough transform, the Freeman algorithm, the PCA-HT algorithm, or the like, to perform the line search on the video image.

Step 23: determine at least three target lines from the lines found; and

Step 24: determine the target area of the video image from the determined target lines.

Further, in step 23, determining at least three target lines from the lines found specifically includes:

from all the lines found, determining the lines that can form corners, and computing the intersections between those lines; and

grouping the computed intersections by region and, within each group of intersections, selecting the intersection farthest from the center point of the video image and taking the two lines through that intersection as target lines. Here the video image is divided into four regions by the horizontal and vertical lines through its center point.

In implementation, after the line search on the video image in step 22, the lines found may be discontinuous short segments that do not form corners. Those short segments need to be extended at both ends, to determine whether the extended segments can form corners.
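The corner-finding logic of steps 23–24 can be sketched in pure Python. This is a minimal illustration, not the patent's implementation: the line representation a·x + b·y = c and all function names are our own.

```python
import math

def intersect(l1, l2):
    """Intersection of two lines given as (a, b, c) with a*x + b*y = c,
    or None when the lines are parallel (they form no corner)."""
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-9:
        return None
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return (x, y)

def corner_points(lines, width, height):
    """Group in-frame intersections into the four quadrants around the
    image centre and keep, per quadrant, the point farthest from the
    centre -- the 'best point' of each corner region."""
    cx, cy = width / 2.0, height / 2.0
    best = {}
    for i in range(len(lines)):
        for j in range(i + 1, len(lines)):
            p = intersect(lines[i], lines[j])
            if p is None or not (0 <= p[0] <= width and 0 <= p[1] <= height):
                continue
            quad = (p[0] >= cx, p[1] >= cy)          # which of the four regions
            d = math.hypot(p[0] - cx, p[1] - cy)
            if quad not in best or d > best[quad][0]:
                best[quad] = (d, p)
    return {q: p for q, (d, p) in best.items()}
```

For a slide whose border produces two vertical and two horizontal lines, the four quadrants each yield exactly one corner of the target area.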
It should be noted that, because the shape of the user's region of interest may differ between shooting scenes, the number of intersections determined from the captured video image — and hence the number of target lines — may also differ. For example, in a PPT presentation scene, the region of interest is the content of the slides, which is rectangular. From the captured video image, for each group of intersections, the intersection farthest from the center point of the image is selected, yielding four intersections and hence four target lines; the quadrilateral framed by those four target lines is the determined target area.

As another example, in a stage play scene, the region of interest is the stage area. The top of a stage is usually arched while the other three sides are straight, so in that scene the region of interest is a closed figure with one curved side and three straight sides. From the captured video image, three target lines are determined (the bottom edge and the two sides); from those three target lines and a preset display ratio, the fourth target line can be determined, and the quadrilateral framed by the four target lines is the determined target area. Preferably, the preset display ratio can be a common ratio such as 16:9 or 4:3.
As one implementation, if three target lines are determined in step 23, then in step 24 determining the target area from the determined target lines includes:

from the three determined target lines, selecting the two target lines that each have only one intersection;

determining the endpoints of the two selected target lines according to the preset display ratio;

determining the line passing through the two endpoints thus found, and taking that line as the fourth target line; and

taking the quadrilateral enclosed by the four target lines as the target area.

Specifically, from the three determined target lines, the two lines with only one intersection each are selected and denoted L1 and L2. From the preset display ratio (such as 16:9 or 4:3) and the length of the target line with two intersections (denoted L3), the lengths of L1 and L2 can be determined, and hence their endpoints. From the endpoints of L1 and L2, the line through both endpoints is determined and taken as the fourth target line. The four target lines enclose a quadrilateral, which is the target area.
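The construction of the fourth target line can be sketched under a simplifying assumption the patent does not state: that the side edges L1 and L2 are perpendicular to the shared edge L3. The function name and this geometry are our own illustration.

```python
import math

def fourth_edge(l3_p1, l3_p2, ratio=(16, 9)):
    """Given the endpoints of the shared edge L3 and a display ratio,
    place the side edges L1, L2 perpendicular to L3 with length set by
    the ratio, and return the endpoints of the missing fourth edge.
    (Simplified, hypothetical reading; the sign of the normal -- which
    side of L3 the quadrilateral lies on -- is chosen arbitrarily.)"""
    dx, dy = l3_p2[0] - l3_p1[0], l3_p2[1] - l3_p1[1]
    w = math.hypot(dx, dy)              # length of L3
    h = w * ratio[1] / ratio[0]         # side length from the display ratio
    nx, ny = -dy / w, dx / w            # unit normal to L3
    p4a = (l3_p1[0] + nx * h, l3_p1[1] + ny * h)
    p4b = (l3_p2[0] + nx * h, l3_p2[1] + ny * h)
    return p4a, p4b
```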
As another implementation, if four target lines are determined in step 23, then in step 24 determining the target area from the determined target lines includes: taking the quadrilateral enclosed by the four determined target lines as the target area.
In implementation, in step 21, performing edge detection on the video image specifically includes: binarizing and filtering the video image to remove interference in it, and performing edge detection on the processed video image.

Way B: when a video image is received, the target area is determined according to an instruction from the user. Specifically, an instruction designating the target area in the image is received, and the target area is determined according to that instruction.

Specifically, the user can select the target area in the video image as needed, by pressing keys or by touch.
In implementation, in step 11, after the video image acquired from the outside world is received and before the target area in it is determined, the method further includes:

performing exposure control and focus control on the video image acquired from the outside world. Focus control on the video image ensures that the image of the target area is sharpest; exposure control ensures that the exposure of the target area is normal — neither too bright nor too dark — which avoids interference from the brightness of other regions.

Preferably, exposure control and focus control are applied only to the central region of the received video image, where the central region is the innermost square when the video image is divided into a 3 × 3 grid of squares.
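The central region used for initial exposure and focus control is simply the middle cell of a 3 × 3 division of the frame. A minimal sketch (our own helper, using integer division so dimensions not divisible by 3 are approximated):

```python
def center_cell(width, height):
    """Centre cell of a 3x3 grid over a frame of the given size.
    Returns (left, top, right, bottom) pixel bounds."""
    return (width // 3, height // 3, 2 * width // 3, 2 * height // 3)
```

For a 1920 × 1080 frame this gives the rectangle from (640, 360) to (1280, 720).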
It should be noted that, because the target area has not yet been determined at this point, the target area is first assumed to lie in the central region of the video image; therefore exposure control and focus control can be applied only to the central region of the received image, to ensure the sharpness and normal exposure of the target area's image.
In implementation, preferably, after step 11 and before the frame is cropped in step 12, the method further includes:

adjusting the size of the determined target area according to the preset display ratio, and taking the adjusted target area as the final target area.

Preferably, the preset display ratio can be a common ratio such as 16:9 or 4:3.

In implementation, step 12 specifically includes: for each frame received after the target area has been determined, cropping and correcting the image within the target area of the frame according to the parameter information of the target area and the preset display ratio.

Because of the shooting angle, the determined target area may not be rectangular, so the image within the target area of the video frame must be corrected, to obtain and output a rectangular image.
Preferably, the image within the target area of the frame is corrected according to formula one:

x = c1·x′ + c2·y′ + c3·x′·y′ + c4,  y = c5·x′ + c6·y′ + c7·x′·y′ + c8  … formula one;

where x′, y′ are the coordinates after correction of the image within the target area of the frame, x, y are the coordinates of the image within the target area of the frame, and c1 to c8 are known parameter values.
Specifically, the embodiments of the present invention correct the image within the target area by inverse transformation: space is first reserved for the output image, and for each output pixel the best-matching point in the original image is found, with the relationship between the original image and the corrected image modelled by the above bilinear equations.

It should be noted that c1 to c8 are determined from the coordinates of the four corners of the original image within the target area and the coordinates of the four corners of the corrected image (i.e. the preset corner coordinates of the output image). Since there are eight corresponding equations in total, the eight parameters c1 to c8 can be solved. Preferably, to obtain a better display effect, weighted interpolation is applied for the non-integer coordinate values arising from these correspondences.
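Solving for the eight parameters from the four corner correspondences reduces to two 4 × 4 linear solves (one for the x-coefficients, one for the y-coefficients). A pure-Python sketch under our own formulation of the bilinear model x = c1·x′ + c2·y′ + c3·x′·y′ + c4 (the patent does not write out the system explicitly):

```python
def solve_linear(A, b):
    """Gaussian elimination with partial pivoting (pure Python)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for k in range(col, n + 1):
                M[r][k] -= f * M[col][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def bilinear_coeffs(src_corners, dst_corners):
    """Fit x = c1*x' + c2*y' + c3*x'*y' + c4 (and likewise y = c5..c8)
    from four corner correspondences, giving the parameters c1..c8."""
    A = [[xp, yp, xp * yp, 1.0] for (xp, yp) in dst_corners]
    cx = solve_linear(A, [x for (x, _) in src_corners])
    cy = solve_linear(A, [y for (_, y) in src_corners])
    return cx + cy                       # [c1..c4, c5..c8]

def warp(c, xp, yp):
    """Inverse mapping: output (corrected) pixel -> source coordinates."""
    return (c[0] * xp + c[1] * yp + c[2] * xp * yp + c[3],
            c[4] * xp + c[5] * yp + c[6] * xp * yp + c[7])
```

In use, each output pixel (x′, y′) is mapped back through `warp` and the source pixel is sampled (with weighted interpolation when the result is non-integer).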
Further, to allow the user to adjust the determined target area, in step 11, after the target area is determined and before the frame is cropped, the method further includes:

adding the parameter information of the determined target area to the video image for display, so that the determined target area is shown in the video image;

if an adjustment instruction is received, adjusting the size of the target area according to the instruction, and taking the adjusted target area as the final target area;

if no adjustment instruction is received, taking the determined target area as the final target area.

Correspondingly, each frame received after the final target area has been determined is cropped according to the parameter information of the final target area, and the image within the final target area of that frame is obtained and output.

Preferably, the user can adjust the target area via the touch screen, keys, or other human-machine interaction means.
To reduce power consumption and avoid interference from certain scenes, preferably, the method further includes:

obtaining the shake amplitude in real time, and processing the currently obtained shake amplitude as follows:

if the current shake amplitude exceeds a preset first threshold, the target area is re-determined;

if the current shake amplitude exceeds a preset second threshold but does not exceed the first threshold, image stabilization is performed.

Here the first and second thresholds are empirical values, the first threshold is greater than the second, and both can be set according to the actual shooting scene.

Specifically, if the current shake amplitude exceeds the first threshold, the shooting position is considered to have changed substantially or the shooting scene to have changed, so the target area must be re-determined. If the current shake amplitude exceeds the second threshold but not the first, the capture device is considered to have undergone a small shake, so only image stabilization is needed and the target area need not be re-determined.

Further, if the current shake amplitude does not exceed the second threshold, no action is taken.
In implementation, the embodiments of the present invention can detect the shake amplitude using an accelerometer, a displacement sensor, or similar detection devices. Taking an accelerometer as an example, the process is:

query, in real time, the coordinate values on the three axes of the accelerometer (i.e. the three spatial dimensions);

when the change of at least one coordinate value exceeds the first threshold, re-determine the target area, i.e. execute steps 11–12;

when the change of at least one coordinate value exceeds the second threshold but not the first, perform image stabilization.

Further, when none of the three detected coordinate values changes by more than the second threshold, no action is taken.
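The two-threshold decision above can be sketched as a small classifier over consecutive accelerometer readings. The function name and return labels are our own; the threshold values are empirical, as the patent notes.

```python
def classify_motion(prev, curr, low, high):
    """Classify device movement from two accelerometer readings
    (x, y, z).  Returns:
      'redetect'  -- some axis changed by more than the high (first)
                     threshold: the scene changed, re-find the target area;
      'stabilize' -- a change exceeded the low (second) threshold only:
                     a small shake, apply image stabilization;
      'none'      -- no axis changed by more than the low threshold."""
    biggest = max(abs(c - p) for p, c in zip(prev, curr))
    if biggest > high:
        return 'redetect'
    if biggest > low:
        return 'stabilize'
    return 'none'
```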
Further, audio must also be recorded during shooting. Specifically, external audio is received through a microphone and encoded, so that the outside audio is recorded simultaneously during video capture.

The video processing method of the embodiments of the present invention is described in detail below with reference to specific examples.

Embodiment one. The video processing method of the embodiments of the present invention is explained in detail taking a PPT shooting scene as an example; other shooting scenes are similar and are not described one by one here.
Because the region of interest to the user (i.e. the location of the slides) is brighter during shooting than the other regions, as shown in Fig. 3A, the captured video image is first preprocessed to separate the region of interest from the rest of the image. Specifically, median filtering is applied; then, to eliminate interference, the video image undergoes one erosion (the Erode algorithm), then two dilations (the Dilate algorithm), and finally one more erosion, yielding the binary image shown in Fig. 3B, in which all edges unrelated to the region of interest are masked out.
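The erode–dilate–dilate–erode sequence can be illustrated on a 0/1 grid with a 3 × 3 structuring element. This is a minimal pure-Python sketch of the standard morphology operations, not the patent's implementation:

```python
def erode(img):
    """3x3 binary erosion on a 0/1 grid; out-of-bounds neighbours count
    as 0, so foreground touching the border is also eroded."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = 1 if all(
                0 <= y + dy < h and 0 <= x + dx < w and img[y + dy][x + dx]
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)) else 0
    return out

def dilate(img):
    """3x3 binary dilation on a 0/1 grid."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = 1 if any(
                img[y + dy][x + dx]
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                if 0 <= y + dy < h and 0 <= x + dx < w) else 0
    return out

def denoise(img):
    """The sequence described above: erode once, dilate twice, erode once.
    Isolated specks are removed; large solid regions survive."""
    return erode(dilate(dilate(erode(img))))
```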
Next, edge detection is performed on the processed binary image (this embodiment uses the Canny algorithm) to extract the edges in the binary image, giving the video image shown in Fig. 3C.

Then, a line search is performed according to the obtained edge information (this embodiment uses the Hough transform). The lines found are drawn with thicker strokes in the figure. The lines that can form corners are paired and their intersections computed, yielding the intersections of the four corners that may become the target area, as shown in Fig. 3C. The edge lines of the target area found by the search may be short segments that do not form corners, such as the thick lines in Fig. 3C; in that case each short segment must be extended at both ends, to determine whether the extended segment forms a corner.

Then the intersections within each region of the video image (the image is divided into four regions by its center lines) are evaluated, and in each region the intersection farthest from the center of the video image is chosen as the best point (i.e. a corner intersection of the target area). This determines the intersections of the four corners of the target area, and hence its four edge lines (i.e. the target lines). By the above method, the optimal bounding quadrilateral is found from the lines searched out and taken as the target area, as shown in Fig. 3D.

The line search can be affected by lens distortion. Preferably, if a wide-angle lens is used for shooting, lens-distortion correction is performed first and the line search afterwards, to reduce the difficulty of the search.
Although threshold processing (i.e. the median filtering) has been applied, some unwanted lines still appear, so the lines found must be pruned to delete the lines unrelated to the target area. These unrelated segments mainly comprise lines in the central region of the image and short segments.

Specifically, a segment in the central region can be identified by its distance to the center point of the video image: for example, if a line's distance to the center point is less than a preset distance threshold, the line is determined to lie in the central region; if its distance to the center point is not less than the threshold, it is not. A short segment can be identified by its length: for example, if a line's length is less than a preset length threshold, it is determined to be a short segment; if its length is not less than the threshold, it is not.
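The two pruning tests above — drop segments that are too short, and drop segments whose line passes too close to the image centre — can be sketched as follows (function name, segment representation and threshold names are our own):

```python
import math

def filter_lines(segments, center, min_len, min_center_dist):
    """Keep only segments that are at least min_len long and whose
    supporting line stays at least min_center_dist from the image
    centre (lines near the centre are likely slide content, not the
    slide's border)."""
    kept = []
    for (x1, y1), (x2, y2) in segments:
        length = math.hypot(x2 - x1, y2 - y1)
        if length < min_len:
            continue                       # short segment -> drop
        cx, cy = center
        # point-to-line distance from the centre to the infinite line
        dist = abs((y2 - y1) * cx - (x2 - x1) * cy + x2 * y1 - y2 * x1) / length
        if dist < min_center_dist:
            continue                       # central line -> drop
        kept.append(((x1, y1), (x2, y2)))
    return kept
```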
Preferably, the lines found are pruned according to preset rules, deleting central lines and/or short segments.

Preferably, to find the four best corners, the lines found are grouped into four groups: top, bottom, left and right.
Finally, according to the preset display ratio, the image within the target area of each subsequently received frame is cropped and corrected, to obtain the image of the rectangular region of interest, as shown in Fig. 3E. In Fig. 3E, (x, y) are the coordinates of each pixel of the image within the target area of each frame; after the image within the target area of a frame is corrected with formula one, the coordinates of each pixel are (x′, y′).

Further, the obtained image of the rectangular region of interest is output to the back end for processing, which mainly includes image encoding, image transmission, and the like.
Embodiment two. This embodiment provides a preferred video processing procedure. As shown in Fig. 4, it includes the following steps:

Step 41: determine the parameter information of the target area from the received video image;

Step 42: add the parameter information of the determined target area to each currently received frame for display;

Step 43: the user judges from the displayed result whether an adjustment is needed; if so, execute step 44; if not, execute step 45;

Step 44: receive the user's adjustment instruction, adjust the size of the target area accordingly, and execute step 45;

Step 45: the user confirms whether to accept the target area; if so, execute step 46; if not, return to step 41;

Step 46: configure the image-correction parameters, and output the corrected image within the target area of each frame. Specifically, the image-correction parameters are configured from the coordinates of the four corners of the target area (the adjusted target area if an adjustment was made, otherwise the originally determined target area) and the preset display ratio;

Step 47: carry out the subsequent video recording process.
Embodiment three. This embodiment provides a movement detection process; an accelerometer is used for detection in this embodiment. As shown in Fig. 5, the process includes:

Step 51: start movement detection;

Step 52: query in real time the data provided by the accelerometer (i.e. the coordinate values of its X, Y and Z axes);

Step 53: judge from the queried three-axis coordinate values whether the device has moved; if so (i.e. the change of the coordinate value on at least one axis exceeds the second threshold), execute step 54; if not (i.e. no axis changes by more than the second threshold), return to step 52;

Step 54: judge whether the movement is a small shake; if so (i.e. the change on at least one axis exceeds the second threshold but not the first), execute step 55; if not (i.e. the change on at least one axis exceeds the first threshold), execute step 56;

Step 55: perform image stabilization;

Step 56: restart the detection of the target area.
As shown in Fig. 6, during video recording, the time of video frames 1 to L is the target area determination phase, and the time of video frames L to M is the stabilization phase, in which shake and movement are detected with the accelerometer. If, at video frame M, the coordinate value of any axis is found to have changed substantially, the shooting scene is considered to have changed and the target area must be re-determined, corresponding to video frames M to N on the time axis.

The above method flow can be implemented as a software program stored in a storage medium; when the stored software program is invoked, the above method steps are executed.

Based on the same inventive concept, an embodiment of the present invention further provides a video processing apparatus. Since the principle by which this apparatus solves the problem is similar to that of the above video processing method, its implementation can refer to the implementation of the method, and repetition is omitted.
As shown in Fig. 7, a video processing apparatus provided by an embodiment of the present invention includes:

a target area determination unit 71, configured to receive video images acquired from the outside world and determine the target area in the video images;

a processing unit 72, configured to crop each frame received after the target area has been determined, according to the parameter information of the target area, and to obtain and output the image within the target area of that frame.

In implementation, the target area determination unit 71 includes a target area identification module 711, where:

the target area identification module 711 is configured to perform edge detection on the video image to obtain the edge information of each region of the image; perform a line search on the video image according to the obtained edge information; determine at least three target lines from the lines found; and determine the target area from the determined target lines; or alternatively, to receive an instruction designating the target area in the video image and determine the target area according to that instruction.

Further, the target area identification module 711 performs edge detection on the video image as follows: binarize and filter the video image to remove interference in it, and perform edge detection on the processed video image.
Further, the target area identification module 711 determines at least three target straight lines from the searched straight lines according to the following steps:
from all the searched straight lines, determining the straight lines that can form corners, and calculating the intersection points between the corner-forming straight lines; and grouping all the calculated intersection points by region; from each group of intersection points, selecting the intersection point farthest from the center point of the video image, and taking the two straight lines on which that intersection point lies as target straight lines, wherein the video image is divided into four regions by the horizontal line and the vertical line passing through its center point.
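The intersection calculation and the per-quadrant "farthest from center" selection can be sketched as follows. This is illustrative only: lines are assumed to be represented as coefficients (a, b, c) of a·x + b·y = c, and the in-bounds filter on intersection points is an added assumption:

```python
def intersect(l1, l2):
    # intersection of two lines given as (a, b, c) with a*x + b*y = c
    (a1, b1, c1), (a2, b2, c2) = l1, l2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-9:
        return None  # parallel lines cannot form a corner
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

def corner_points(lines, center, width, height):
    # group in-bounds intersections by quadrant relative to the image center,
    # keeping in each quadrant the intersection farthest from the center
    groups = {}
    for i in range(len(lines)):
        for j in range(i + 1, len(lines)):
            p = intersect(lines[i], lines[j])
            if p is None or not (0 <= p[0] < width and 0 <= p[1] < height):
                continue
            quad = (p[0] >= center[0], p[1] >= center[1])
            d = (p[0] - center[0]) ** 2 + (p[1] - center[1]) ** 2
            if quad not in groups or d > groups[quad][0]:
                groups[quad] = (d, p)
    return {q: v[1] for q, v in groups.items()}
```

The two straight lines through each selected intersection would then be taken as target straight lines, as the embodiment describes.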
Preferably, the target area determination unit 71 includes an image processing module 712, wherein:
the image processing module 712 is configured to perform exposure control and focus control on the video image after the video image obtained from the outside is received and before the target area in the video image is determined.
In implementation, preferably, the target area identification module 711 is further configured to:
adjust the size of the target area according to a set display scale, and take the adjusted target area as the final target area.
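The display-scale adjustment can be sketched as expanding the detected region until it matches the configured aspect ratio. This is illustrative only; keeping the adjusted region centred on the original region's centre is an assumption, since the embodiment only states that the size is adjusted to the set display scale:

```python
def adjust_to_aspect(box, aspect):
    # box = (x, y, w, h); grow the shorter side so that w / h == aspect,
    # keeping the box centred on its original centre point
    x, y, w, h = box
    if w / h < aspect:
        new_w, new_h = h * aspect, h   # too narrow: widen
    else:
        new_w, new_h = w, w / aspect   # too wide: heighten
    cx, cy = x + w / 2, y + h / 2
    return (cx - new_w / 2, cy - new_h / 2, new_w, new_h)
```

For example, a square detected region adjusted to a 16:9 display scale keeps its height and widens symmetrically.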
In implementation, preferably, the target area identification module 711 is further configured to:
superimpose the parameter information of the determined target area onto the video image for display;
if an adjustment instruction is received, adjust the size of the target area according to the adjustment instruction, and take the adjusted target area as the final target area;
if no adjustment instruction is received, take the determined target area as the final target area.
In implementation, the processing unit 72 is specifically configured to:
perform cropping processing and correction processing on the image within the target area of the frame of video image according to the parameter information of the target area and the set display scale.
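The correction processing — mapping the quadrilateral target area onto an upright rectangle — can be sketched as computing a projective transform from the four detected corner points. This is a hedged, pure-Python sketch: a real implementation would also resample the pixels (omitted here), and the 8 × 8 linear solve is written out only to keep the example self-contained:

```python
def solve8(A, b):
    # Gaussian elimination with partial pivoting for the 8x8 system A·x = b
    n = 8
    M = [A[i][:] + [b[i]] for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def homography(src, dst):
    # 8 unknowns of the projective map taking each src corner to its dst corner
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    return solve8(A, b)

def apply_h(h, x, y):
    # apply the projective map h = (a, b, c, d, e, f, g, k) to point (x, y)
    a, b_, c, d, e, f, g, k = h
    w = g * x + k * y + 1
    return ((a * x + b_ * y + c) / w, (d * x + e * y + f) / w)
```

Cropping then reduces to keeping only the pixels whose corrected coordinates fall inside the output rectangle.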
In implementation, in order to reduce power consumption and avoid interference from certain scenes, preferably, the apparatus further includes:
a movement detection unit 73, configured to obtain the shake amplitude in real time and perform the following processing on the currently obtained shake amplitude: if the currently obtained shake amplitude is greater than a set first threshold, re-determining the target area; if the currently obtained shake amplitude is greater than a set second threshold and not greater than the first threshold, performing anti-shake processing; wherein the first threshold is greater than the second threshold.
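The two-threshold decision above can be sketched directly; the function and action names are illustrative, not part of the embodiment:

```python
def classify_shake(amplitude, t_redetect, t_stabilize):
    # t_redetect is the first threshold, t_stabilize the second; the first
    # threshold must be greater than the second, as the embodiment requires
    assert t_redetect > t_stabilize
    if amplitude > t_redetect:
        return "redetect_target"  # scene likely changed: re-determine the area
    if amplitude > t_stabilize:
        return "stabilize"        # small shake: perform anti-shake processing
    return "none"                 # no movement: perform no operation
```

Amplitudes at or below the second threshold trigger no work at all, which is what saves power.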
In implementation, in order to synchronize image and sound, preferably, the apparatus further includes:
an audio encoding unit, configured to receive external audio information and perform encoding processing on the received audio information, so that during video shooting the external audio is recorded simultaneously.
A preferred hardware implementation of the embodiment of the present invention is given below, wherein the functions of the target area determination unit 71 and the processing unit 72 are performed by a processor, the function of the movement detection unit 73 is performed by an acceleration sensor, and the function of the audio encoding unit may be performed by an audio encoder.
The video processing apparatus provided by the embodiment of the present invention can be applied to electronic devices with video recording functions, such as digital video cameras, mobile phones and PADs.
The video processing apparatus provided by the present invention is described in detail below with reference to a preferred embodiment.
As shown in Figure 8, the video processing apparatus provided in this embodiment includes:
an image processing module 81, configured to perform optimization processing on the image obtained by the image sensor, wherein the optimization processing includes but is not limited to one or more of the following:
focus control, white balance control, exposure control, contrast enhancement, color adjustment, lens correction, image noise processing, image edge enhancement and color space conversion;
a target area identification module 82, configured to isolate the edge information of the target area, determine the shape of the target area, and find the best composition quadrangle.
It should be noted that the functions implemented by the image processing module 81 and the target area identification module 82 in this embodiment are respectively identical to the functions implemented by the image processing module 712 and the target area identification module 711 of the target area determination unit 71 in the embodiment shown in Figure 7.
A target area graphic correction and display scale correction module 83, configured to, after the parameter information of the target area is obtained, perform cropping processing and correction recovery processing on each subsequently received frame of video image, obtain the rectangular target area output image, and transfer it to the storage module 87 for storage;
wherein the final size of the output image can be set; if a final output image size is set, the target area graphic correction and display scale correction module can scale the image according to the set ratio before output.
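The scaling to a set output size can be sketched as choosing the single uniform scale factor that fits the corrected image into the configured dimensions (an illustrative sketch; aspect-preserving fit is an assumption):

```python
def scale_to_output(w, h, out_w, out_h):
    # uniform scale factor that fits a (w, h) image into the set output size
    # without distorting its aspect ratio
    s = min(out_w / w, out_h / h)
    return round(w * s), round(h * s)
```

For example, a 4:3 corrected target area fit into a 1920 × 1080 output is limited by height, not width.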
A back-end processing module 84, configured to obtain the target area output image from the storage module 87, perform encoding processing on the target area output image, and transfer the processed image to the storage module 87 for storage.
It should be noted that the functions implemented by the target area graphic correction and display scale correction module 83 and the back-end processing module 84 in this embodiment are identical to the functions implemented by the processing unit 72 in the embodiment shown in Figure 7.
A movement detection module 85, configured to detect the movement of the apparatus itself: when the movement is determined to be a small shake, anti-shake processing is performed; when the movement is determined to be a change of the shooting scene, the target area identification module 82 is triggered to re-identify the target area; when it is determined that no movement has occurred, no operation is performed, thereby improving the reliability of scene change detection and of the anti-shake processing.
It should be noted that the function implemented by the movement detection module 85 in this embodiment is identical to the function implemented by the movement detection unit 73 in the embodiment shown in Figure 7.
An audio encoding module 86, configured to perform encoding processing on the received external audio information and store the processed audio information in the storage module 87, so that the external sound is recorded synchronously.
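The synchronous recording of image and sound can be sketched as merging timestamped video frames and audio chunks into one presentation-ordered stream; the tuple representation and millisecond timestamps are assumptions made for illustration:

```python
def mux(video_frames, audio_chunks):
    # each item is (timestamp_ms, payload); merging by timestamp keeps
    # picture and sound aligned when the streams are played back
    stream = sorted(video_frames + audio_chunks, key=lambda item: item[0])
    return [payload for _, payload in stream]
```

A real apparatus would write both streams to the storage module in container format rather than a flat list; the sketch only shows the ordering principle.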
It should be noted that the embodiment shown in Figure 8 and the embodiment shown in Figure 7 divide the modules of the video processing apparatus differently. The above two embodiments merely illustrate the functions implemented by the video processing apparatus and do not limit its module division; those skilled in the art may divide the modules according to the functions that the video processing apparatus can implement.
In the embodiment shown in Figure 8, the processing procedure of the data flow can be seen in Figure 9. The upper half of Figure 9 is the processing procedure of the video image, including the image acquisition process, the optimization processing of the received video image (the image processing part in the figure), the identification process of the target area (the target area identification part in the figure), the determination process of the target area (the target area setting part in the figure, including the two modes of automatic determination and user determination) and the image correction process of the target area (the target area image correction part in the figure); the lower half of Figure 9 is the audio processing procedure, in which the received audio data is encoded (the audio encoding part in the figure).
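The per-frame video path of Figure 9 can be summarized in one control loop. This is a hypothetical sketch: the callables `detect_target`, `crop_correct` and `shake_amplitude` stand in for the identification, correction and movement-detection modules, and the threshold defaults are invented for illustration:

```python
def process_frame(frame, state, detect_target, crop_correct, shake_amplitude,
                  t1=8.0, t2=2.0):
    # One pass of the video path: re-determine the target area when the shake
    # amplitude exceeds the first threshold t1 (or no area exists yet),
    # perform anti-shake when it lies between t2 and t1, then crop + correct.
    amp = shake_amplitude()
    if state.get("target") is None or amp > t1:
        state["target"] = detect_target(frame)
    out = crop_correct(frame, state["target"])
    if t2 < amp <= t1:
        state["stabilized"] = True
    return out
```

The audio path of the lower half of Figure 9 runs independently and is merged only at storage time.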
A preferred hardware implementation of this embodiment is given below. The video processing apparatus provided by the embodiment of the present invention needs to process the video signal in real time and has relatively high requirements on data bandwidth and real-time performance. Therefore, the video processing apparatus can be a monolithic SoC (System on Chip, also referred to as a system-level chip) or an FPGA (Field Programmable Gate Array) circuit, wherein the SoC or FPGA can implement the functions of the target area determination unit 71, the processing unit 72 and the movement detection unit 73 of the video processing apparatus provided by the embodiment of the present invention. The embodiment of the present invention does not limit the specific structure (such as the logic circuit) of the SoC or FPGA; any hardware structure of an SoC or FPGA that can implement the functions of the modules in the video processing apparatus of the embodiment of the present invention is covered by the embodiment of the present invention.
Taking the video processing apparatus shown in Figure 8 as an example, the functions of the image processing module 81, the target area identification module 82, the target area graphic correction and display scale correction module 83, the back-end processing module 84, the movement detection module 85, the audio encoding module 86 and the storage module 87 (including the storage interface) in the apparatus can be implemented on a monolithic SoC or FPGA.
It should be understood by those skilled in the art that the embodiments of the present invention may be provided as a method, a system or a computer program product. Therefore, the present invention may take the form of a complete hardware embodiment, a complete software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of the method, device (system) and computer program product according to the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions can be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to work in a particular manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus, which implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operation steps are executed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although preferred embodiments of the present invention have been described, those skilled in the art, once knowing the basic creative concept, may make additional changes and modifications to these embodiments. Therefore, the appended claims are intended to be interpreted as covering the preferred embodiments and all changes and modifications falling within the scope of the present invention.
Obviously, those skilled in the art can make various changes and modifications to the present invention without departing from its spirit and scope. Thus, if these modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalent technologies, the present invention is also intended to include these modifications and variations.
Claims (13)
1. A video processing method, characterized in that the method includes:
receiving a video image obtained from the outside, and determining a target area in the video image;
for each frame of video image received after the target area is determined, performing cropping processing on the frame according to the parameter information of the target area, to obtain and output the image within the target area of the frame;
after receiving the video image obtained from the outside and before determining the target area in the video image, the method further includes:
performing exposure control and focus control on a central region of the received video image, the central region of the video image referring to the innermost square when the video image is divided into a 3 × 3 grid of squares;
after determining the target area and before performing cropping processing on the frame of video image, the method further includes:
adjusting the size of the target area according to a set display scale, and taking the adjusted target area as the final target area;
the performing, for each frame of video image received after the target area is determined, cropping processing on the frame according to the parameter information of the target area specifically includes:
performing cropping processing and correction processing on the image within the target area of the frame according to the parameter information of the target area and the set display scale.
2. The method according to claim 1, characterized in that determining the target area in the video image specifically includes:
performing edge detection on the video image to obtain edge information of each region in the video image;
performing straight-line search processing on the video image according to the obtained edge information;
determining at least three target straight lines from the searched straight lines; and
determining the target area according to the determined target straight lines.
3. The method according to claim 2, characterized in that performing edge detection on the video image specifically includes:
performing binarization processing and filtering processing on the video image to remove interference information in the video image, and performing edge detection on the processed video image.
4. The method according to claim 2, characterized in that determining at least three target straight lines from the searched straight lines specifically includes:
from all the searched straight lines, determining the straight lines that can form corners, and calculating the intersection points between the corner-forming straight lines;
grouping all the calculated intersection points by region, and from each group of intersection points, selecting the intersection point farthest from the center point of the video image, and taking the two straight lines on which that intersection point lies as target straight lines, wherein the video image is divided into four regions by the horizontal line and the vertical line passing through its center point.
5. The method according to claim 1, characterized in that determining the target area in the video image specifically includes:
receiving an instruction command specifying the target area in the video image, and determining the target area according to the instruction command.
6. The method according to any one of claims 1 to 5, characterized in that, after determining the target area and before performing cropping processing on the frame of video image, the method further includes:
superimposing the parameter information of the determined target area onto the video image for display;
if an adjustment instruction is received, adjusting the size of the target area according to the adjustment instruction, and taking the adjusted target area as the final target area;
if no adjustment instruction is received, taking the determined target area as the final target area.
7. The method according to any one of claims 1 to 5, characterized in that the method further includes:
obtaining the shake amplitude in real time, and performing the following processing on the currently obtained shake amplitude:
if the currently obtained shake amplitude is greater than a set first threshold, re-determining the target area;
if the currently obtained shake amplitude is greater than a set second threshold and not greater than the first threshold, performing anti-shake processing;
wherein the first threshold is greater than the second threshold.
8. A video processing apparatus, characterized in that the apparatus includes:
a target area determination unit, configured to receive a video image obtained from the outside and determine a target area in the video image;
a processing unit, configured to, for each frame of video image received after the target area is determined, perform cropping processing on the frame according to the parameter information of the target area, to obtain and output the image within the target area of the frame;
wherein the target area determination unit includes an image processing module, and the image processing module is configured to: after the video image obtained from the outside is received and before the target area in the video image is determined, perform exposure control and focus control on a central region of the received video image, the central region of the video image referring to the innermost square when the video image is divided into a 3 × 3 grid of squares;
the target area determination unit includes a target area identification module, and the target area identification module is configured to:
adjust the size of the target area according to a set display scale, and take the adjusted target area as the final target area;
the processing unit is specifically configured to:
perform cropping processing and correction processing on the image within the target area of the frame according to the parameter information of the target area and the set display scale.
9. The apparatus according to claim 8, characterized in that the target area identification module is configured to: perform edge detection on the video image to obtain edge information of each region in the video image; perform straight-line search processing on the video image according to the obtained edge information; determine at least three target straight lines from the searched straight lines; and determine the target area according to the determined target straight lines; or
receive an instruction command specifying the target area in the video image, and determine the target area according to the instruction command.
10. The apparatus according to claim 9, characterized in that the target area identification module performs edge detection on the video image according to the following steps:
performing binarization processing and filtering processing on the video image to remove interference information in the video image, and performing edge detection on the processed video image.
11. The apparatus according to claim 9, characterized in that the target area identification module determines at least three target straight lines from the searched straight lines according to the following steps:
from all the searched straight lines, determining the straight lines that can form corners, and calculating the intersection points between the corner-forming straight lines; and grouping all the calculated intersection points by region; from each group of intersection points, selecting the intersection point farthest from the center point of the video image, and taking the two straight lines on which that intersection point lies as target straight lines, wherein the video image is divided into four regions by the horizontal line and the vertical line passing through its center point.
12. The apparatus according to any one of claims 8 to 11, characterized in that the target area identification module is further configured to:
superimpose the parameter information of the determined target area onto the video image for display;
if an adjustment instruction is received, adjust the size of the target area according to the adjustment instruction, and take the adjusted target area as the final target area;
if no adjustment instruction is received, take the determined target area as the final target area.
13. The apparatus according to any one of claims 8 to 11, characterized in that the apparatus further includes:
a movement detection unit, configured to obtain the shake amplitude in real time and perform the following processing on the currently obtained shake amplitude:
if the currently obtained shake amplitude is greater than a set first threshold, re-determining the target area; if the currently obtained shake amplitude is greater than a set second threshold and not greater than the first threshold, performing anti-shake processing; wherein the first threshold is greater than the second threshold.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310292305.XA CN104301596B (en) | 2013-07-11 | 2013-07-11 | A kind of method for processing video frequency and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104301596A CN104301596A (en) | 2015-01-21 |
CN104301596B true CN104301596B (en) | 2018-09-25 |
Family
ID=52321142
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201310292305.XA Active CN104301596B (en) | 2013-07-11 | 2013-07-11 | A kind of method for processing video frequency and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104301596B (en) |
Families Citing this family (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105991920A (en) * | 2015-02-09 | 2016-10-05 | 钱仰德 | Method of using image cutting to make mobile phone capturing frame automatically track object |
CN105263049B (en) * | 2015-10-28 | 2019-10-29 | 努比亚技术有限公司 | A kind of video Scissoring device, method and mobile terminal based on frame coordinate |
CN106817533A (en) * | 2015-11-27 | 2017-06-09 | 小米科技有限责任公司 | Image processing method and device |
CN105323491B (en) * | 2015-11-27 | 2019-04-23 | 小米科技有限责任公司 | Image capturing method and device |
CN105812667A (en) * | 2016-04-15 | 2016-07-27 | 张磊 | System and method for photographing PPT rapidly |
CN106060411B (en) * | 2016-07-29 | 2019-08-16 | 努比亚技术有限公司 | A kind of focusing mechanism, method and terminal |
CN106657771A (en) * | 2016-11-21 | 2017-05-10 | 青岛海信移动通信技术股份有限公司 | PowerPoint data processing method and mobile terminal |
CN107809670B (en) * | 2017-10-31 | 2020-05-12 | 长光卫星技术有限公司 | Video editing system and method suitable for large-area-array meter-level high-resolution satellite |
CN110830846B (en) * | 2018-08-07 | 2022-02-22 | 阿里巴巴(中国)有限公司 | Video clipping method and server |
CN113298845A (en) * | 2018-10-15 | 2021-08-24 | 华为技术有限公司 | Image processing method, device and equipment |
CN110298380A (en) * | 2019-05-22 | 2019-10-01 | 北京达佳互联信息技术有限公司 | Image processing method, device and electronic equipment |
CN110796012B (en) * | 2019-09-29 | 2022-12-27 | 北京达佳互联信息技术有限公司 | Image processing method and device, electronic equipment and readable storage medium |
CN111986289A (en) * | 2020-08-20 | 2020-11-24 | 广联达科技股份有限公司 | Searching method and device for closed area and electronic equipment |
CN112752158B (en) * | 2020-12-29 | 2023-06-20 | 北京达佳互联信息技术有限公司 | Video display method and device, electronic equipment and storage medium |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101399919A (en) * | 2007-09-25 | 2009-04-01 | 展讯通信(上海)有限公司 | Method for automatic exposure and automatic gain regulation and method thereof |
CN102592279A (en) * | 2011-12-31 | 2012-07-18 | 北京麦哲科技有限公司 | Camera-based visual black edge removing method |
CN103152550A (en) * | 2013-02-22 | 2013-06-12 | 华为技术有限公司 | Implementing method of electronic cloud deck, front end device and receiving device |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2009265692A (en) * | 2008-04-21 | 2009-11-12 | Pfu Ltd | Notebook type information processor and image reading method |
CN102541494B (en) * | 2010-12-30 | 2016-01-06 | 中国科学院声学研究所 | A kind of video size converting system towards display terminal and method |
CN203039812U (en) * | 2012-10-31 | 2013-07-03 | 宁波迪吉特电子科技发展有限公司 | Video compression device based on video content analysis |
Legal Events
Date | Code | Title | Description
---|---|---|---
| C06 | Publication | |
| PB01 | Publication | |
| C10 | Entry into substantive examination | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |
| TR01 | Transfer of patent right | |
Effective date of registration: 20190930 Address after: Room 1101, Wanguo building office, intersection of Tongling North Road and North 2nd Ring Road, Xinzhan District, Hefei City, Anhui Province, 230000 Patentee after: Hefei Torch Core Intelligent Technology Co., Ltd. Address before: 519085 High-tech Zone, Tangjiawan Town, Zhuhai City, Guangdong Province Patentee before: Torch Core (Zhuhai) Technology Co., Ltd. |