CN105120302A - Video processing method and device - Google Patents
- Publication number
- CN105120302A (application number CN201510536868.8A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/14—Picture signal circuitry for video frequency region
- H04N5/20—Circuitry for controlling amplitude response
- H04N5/205—Circuitry for controlling amplitude response for correcting amplitude versus frequency characteristic
- H04N5/208—Circuitry for controlling amplitude response for correcting amplitude versus frequency characteristic for compensating for attenuation of high frequency components, e.g. crispening, aperture distortion correction
Abstract
The invention discloses a video processing method and device, belonging to the field of information technology. The method comprises the following steps: acquiring a plurality of video images to be processed; for each of the video images, generating a gray-scale map of the video image according to the edge strength of its pixels, wherein the gray level of each pixel in the gray-scale map is the edge strength of the corresponding pixel in the video image; and sharpening the video image based on the gray-scale map. With this method, regions of high edge strength that should be sharpened can be sharpened selectively; compared with sharpening the whole video image, this avoids over-sharpening and improves the viewing quality of the video.
Description
Technical field
The present invention relates to the field of information technology, and in particular to a video processing method and device.
Background technology
With the development of information technology, users' expectations for video viewing quality keep rising. Constrained by network bandwidth, however, video must be encoded at a reduced bitrate to be transmitted smoothly. How to improve the viewing quality of video at a low bitrate has therefore become an important technical problem in this field.
The prior art provides a video processing method that, after receiving a low-bitrate video, sharpens every frame the video contains, so as to improve the viewing quality of the video without changing the bitrate.
This method, however, has difficulty distinguishing the regions of each frame that should be sharpened from those that should not, and may therefore sharpen regions that should be left untouched, degrading the viewing quality of the video.
Summary of the invention
To solve this problem of the prior art, embodiments of the present invention provide a video processing method and device. The technical solution is as follows:
In one aspect, a video processing method is provided, the method comprising:
acquiring a plurality of video images to be processed;
for each video image of the plurality, generating a gray-scale map of the video image according to the edge strength of its pixels, wherein the gray level of each pixel in the gray-scale map is the edge strength of the corresponding pixel in the video image;
sharpening the video image based on the gray-scale map.
In another aspect, a video processing device is provided, the device comprising:
an acquisition module, configured to acquire a plurality of video images to be processed;
a generation module, configured to generate, for each video image of the plurality, a gray-scale map of the video image according to the edge strength of its pixels, wherein the gray level of each pixel in the gray-scale map is the edge strength of the corresponding pixel in the video image;
a processing module, configured to sharpen the video image based on the gray-scale map.
The technical solution provided by the embodiments of the present invention brings the following beneficial effects:
by acquiring a plurality of video images to be processed; generating, for each video image of the plurality, a gray-scale map whose pixel gray levels are the edge strengths of the corresponding pixels of the video image; and sharpening the video image based on that gray-scale map, regions of high edge strength that should be sharpened can be sharpened selectively. Compared with sharpening the whole video image, this avoids over-sharpening and improves the viewing quality of the video.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings required for the description of the embodiments are briefly introduced below. Apparently, the drawings described below show only some embodiments of the present invention; a person of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a flowchart of a video processing method provided by an embodiment of the present invention;
Fig. 2 is a flowchart of a video processing method provided by an embodiment of the present invention;
Fig. 3 is a block diagram of a video processing device provided by an embodiment of the present invention;
Fig. 4 is a block diagram of a terminal 400 provided by an embodiment of the present invention;
Fig. 5 is a block diagram of a server 500 provided by an embodiment of the present invention.
Detailed description
To make the objectives, technical solutions and advantages of the present invention clearer, the embodiments of the present invention are described in further detail below with reference to the accompanying drawings.
Fig. 1 is a flowchart of a video processing method provided by an embodiment of the present invention. Referring to Fig. 1, the method comprises:
101: acquiring a plurality of video images to be processed.
102: for each video image of the plurality, generating a gray-scale map of the video image according to the edge strength of its pixels, wherein the gray level of each pixel in the gray-scale map is the edge strength of the corresponding pixel in the video image.
103: sharpening the video image based on the gray-scale map.
With this method, regions of high edge strength that should be sharpened can be sharpened selectively. Compared with sharpening the whole video image, this avoids over-sharpening and improves the viewing quality of the video.
Optionally, the method further comprises: applying an operator to determine the edge strength of each pixel in the video image.
Optionally, sharpening the video image based on the gray-scale map comprises: performing a dilation operation and/or a Gaussian blur operation on the gray-scale map to obtain a first intermediate image; performing an erosion operation on the first intermediate image to obtain a second intermediate image; and sharpening the video image based on the second intermediate image.
Optionally, sharpening the video image based on the gray-scale map comprises: using the gray-scale map as a mask, and sharpening the image based on the mask.
Optionally, acquiring the plurality of video images to be processed comprises: performing text detection on a plurality of initial video images of the video, and taking the video images that contain text as the images to be processed.
All of the above optional solutions may be combined in any manner to form optional embodiments of the present invention, which are not described one by one here.
Fig. 2 is a flowchart of a video processing method provided by an embodiment of the present invention. Referring to Fig. 2, the method comprises:
201: acquiring a plurality of video images to be processed.
In the embodiments of the present invention, the plurality of video images may be the frames of a video to be processed. The video may be a locally stored video file, or streaming media transmitted over a network.
The present invention recognizes that, when watching a video, users pay more attention to the text regions of the video. Therefore, in order both to save processing resources and to improve the viewing quality of the video effectively, the present invention processes only the video images that contain text. To this end, an embodiment of the present invention further provides an optional step, performed before step 201, for determining the images to be processed: performing text detection on a plurality of initial video images of the video, and taking the video images that contain text as the plurality of video images to be processed. The text includes: subtitles of the video, the title of the video, and any other text appearing in the video.
202: for each video image of the plurality, applying an operator to determine the edge strength of each pixel in the video image.
The present invention recognizes that, when watching a video, users pay more attention to the edge-dense regions of the video, which include: text regions, the status display area of live streams, the eye regions of faces, and so on. The present invention therefore applies an operator to determine the edge strength of each pixel in the video image, so as to extract these edge-dense regions, process them, and thereby improve the viewing quality of the video.
An edge is the set of pixels whose surrounding gray levels exhibit a step change or a roof-shaped change. In the embodiments of the present invention, the operator may be a differential operator. Based on this operator, determining the edge strength of each pixel of the video image specifically comprises: for each pixel of the image to be processed, extracting the gray levels of the pixels in its neighborhood, computing the planar gradient of those gray levels, and taking the magnitude of the gradient as the edge strength of the pixel.
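The gradient-based edge-strength computation described above can be sketched in pure Python. Central differences are used here to stand in for the unspecified differential operator, and the function name and toy image are illustrative assumptions rather than anything prescribed by the patent:

```python
import math

def edge_strength(gray):
    """Return a map of gradient magnitudes for a 2-D list of gray levels."""
    h, w = len(gray), len(gray[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Central differences of the neighborhood gray levels,
            # clamped at the image borders.
            gx = gray[y][min(x + 1, w - 1)] - gray[y][max(x - 1, 0)]
            gy = gray[min(y + 1, h - 1)][x] - gray[max(y - 1, 0)][x]
            # Gradient magnitude serves as the pixel's edge strength.
            out[y][x] = math.hypot(gx, gy)
    return out

img = [
    [0, 0, 255, 255],
    [0, 0, 255, 255],
    [0, 0, 255, 255],
]
strength = edge_strength(img)
print(strength[1][1], strength[1][0])  # prints: 255.0 0.0
```

Pixels straddling the vertical boundary receive a large edge strength, while pixels in flat areas receive zero, which is exactly the property the gray-scale map of step 203 relies on.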
It should be noted that the operator may also be the Roberts operator, the Sobel operator, the Prewitt operator, or the like; the present invention does not limit the specific form of the edge-extraction operator.
203: generating the gray-scale map of the video image according to the edge strength of its pixels, wherein the gray level of each pixel in the gray-scale map is the edge strength of the corresponding pixel in the video image.
In the embodiments of the present invention, the gray-scale map is generated from the edge strengths of the pixels determined in step 202. Each pixel of the gray-scale map corresponds one-to-one to a pixel of the video image, and the gray level of each pixel of the gray-scale map equals, or is proportional to, the edge strength of that pixel in the video image.
204: sharpening the video image based on the gray-scale map.
The present invention recognizes that the edge-dense regions of a video image should be sharpened, that is, sharpening them improves the viewing quality of the video; whereas the edge-sparse regions whose gray levels vary continuously should not be sharpened, that is, sharpening them may cause over-sharpening and degrade the viewing quality of the video. Accordingly, in order to sharpen only the edge-dense regions, the video image is sharpened based on the gray-scale map generated from the edge strengths in step 203.
Specifically, sharpening the video image based on the gray-scale map comprises: using the gray-scale map as a mask and sharpening the image based on the mask, the mask determining the regions in which sharpening is applied. More specifically, the low-brightness parts of the gray-scale map are taken as regions of weak sharpening, and the high-brightness parts of the gray-scale map as regions of strong sharpening.
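The mask-weighted sharpening just described can be sketched as unsharp masking whose per-pixel strength is scaled by mask brightness. This is a minimal illustration under the assumption that sharpening is implemented by amplifying the difference between the image and a blurred copy; the function name, the `amount` parameter, and the toy data are hypothetical, not taken from the patent:

```python
def masked_sharpen(image, blurred, mask, amount=1.0):
    """Unsharp-style sharpening whose per-pixel strength is scaled by the
    mask brightness (0 = leave untouched, 255 = full sharpening)."""
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            detail = image[y][x] - blurred[y][x]     # high-frequency part
            weight = mask[y][x] / 255.0              # mask-driven strength
            val = image[y][x] + amount * weight * detail
            out[y][x] = max(0.0, min(255.0, val))    # clamp to gray range
    return out

orig = [[100.0, 200.0]]
blur = [[150.0, 150.0]]
mask = [[0, 255]]
result = masked_sharpen(orig, blur, mask)
print(result)  # prints: [[100.0, 250.0]]
```

The pixel under the dark part of the mask is left unchanged, while the pixel under the bright part has its detail amplified, matching the weak/strong sharpening regions described above.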
To determine the edge-dense regions more accurately, in the embodiments of the present invention, sharpening the video image based on the gray-scale map further comprises: processing the gray-scale map so that it represents the edge-dense regions of the video image accurately, and sharpening the video image based on the processed gray-scale map. Specifically, this processing may comprise the following steps 204A-204D:
204A: performing a dilation operation on the gray-scale map.
The dilation operation expands the high-brightness parts of the gray-scale map. Specifically, the dilation operation comprises: determining a sliding window of arbitrary shape for the gray-scale map; for example, if the gray-scale map is a 16:9 rectangular picture, a 16:9 rectangular sliding window may be chosen; sliding the window across the gray-scale map and extracting the brightest pixel in the region of the gray-scale map covered by the window; and, on the gray-scale map, replacing the pixel at the center of the region covered by the window with that brightest pixel.
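The sliding-window maximum of step 204A can be sketched as follows. A square window of radius 1 is an assumed shape (the patent allows an arbitrary window), and the function name is illustrative:

```python
def dilate(gray, radius=1):
    """Gray-scale dilation: replace each pixel with the maximum gray level
    inside the sliding window centered on it, expanding bright regions."""
    h, w = len(gray), len(gray[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Window coordinates clipped to the image bounds.
            out[y][x] = max(
                gray[yy][xx]
                for yy in range(max(y - radius, 0), min(y + radius + 1, h))
                for xx in range(max(x - radius, 0), min(x + radius + 1, w))
            )
    return out

spot = [[0, 0, 0], [0, 255, 0], [0, 0, 0]]
print(dilate(spot))  # the single bright pixel grows to fill every window
```

A single bright pixel (a strong isolated edge response) thus expands into a small bright region, so nearby pixels also receive sharpening.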
204B: performing a Gaussian blur operation on the gray-scale map to obtain a first intermediate image.
After the dilation operation, a Gaussian blur operation may further be applied to the gray-scale map so that the expanded high-brightness parts transition smoothly into the rest of the map. The Gaussian blur operation convolves the gray-scale map with a Gaussian matrix of suitable size to obtain a blurring effect.
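The Gaussian blur of step 204B can be sketched as a direct 2-D convolution. The patent leaves the size of the Gaussian matrix open; a fixed 3x3 approximation (weights summing to 16) is assumed here for illustration:

```python
KERNEL = [[1, 2, 1], [2, 4, 2], [1, 2, 1]]  # 3x3 Gaussian approximation

def gaussian_blur(gray):
    """Convolve the gray-scale map with a small Gaussian kernel so that
    bright regions blend smoothly into their surroundings."""
    h, w = len(gray), len(gray[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    # Clamp coordinates at the image border.
                    yy = min(max(y + dy, 0), h - 1)
                    xx = min(max(x + dx, 0), w - 1)
                    acc += KERNEL[dy + 1][dx + 1] * gray[yy][xx]
            out[y][x] = acc / 16.0
    return out

spot = [[0, 0, 0], [0, 255, 0], [0, 0, 0]]
print(gaussian_blur(spot)[1][1])  # prints: 63.75
```

A flat region is left unchanged (the kernel weights sum to 1 after normalization), while a hard bright spot is spread out and softened, which is the smooth transition the step aims for.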
204C: performing an erosion operation on the first intermediate image to obtain a second intermediate image.
The erosion operation shrinks the high-brightness parts of the gray-scale map appropriately. Corresponding to the dilation operation of 204A, the erosion operation comprises: determining a sliding window of arbitrary shape for the gray-scale map; for example, if the gray-scale map is a 16:9 rectangle, a 16:9 rectangular sliding window may be chosen; sliding the window across the gray-scale map and extracting the darkest pixel in the region of the gray-scale map covered by the window; and, on the gray-scale map, replacing the pixel at the center of the region covered by the window with that darkest pixel.
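Erosion is the mirror image of the dilation sketch: the sliding-window maximum becomes a minimum. Again a square radius-1 window and the function name are illustrative assumptions:

```python
def erode(gray, radius=1):
    """Gray-scale erosion: replace each pixel with the minimum gray level
    inside the sliding window centered on it, shrinking bright regions."""
    h, w = len(gray), len(gray[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Window coordinates clipped to the image bounds.
            out[y][x] = min(
                gray[yy][xx]
                for yy in range(max(y - radius, 0), min(y + radius + 1, h))
                for xx in range(max(x - radius, 0), min(x + radius + 1, w))
            )
    return out

ring = [[255, 255, 255], [255, 0, 255], [255, 255, 255]]
print(erode(ring))  # the single dark pixel hollows out every window
```

Applied after dilation and blur, this trims back the expanded bright mask so that the strong-sharpening region does not overshoot the actual edge-dense area.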
It should be noted that steps 204A, 204B and 204C serve to adjust the extent of the high-brightness parts of the gray-scale map appropriately, and to connect the high-brightness parts smoothly with the low-brightness parts. In practice, steps 204A-204C may be reordered; the embodiment of the present invention merely describes a preferred order and places no restriction on technical solutions formed by reordering steps 204A-204C.
204D: sharpening the video image based on the second intermediate image.
In order to sharpen only the edge-dense regions, the second intermediate image is used as a mask, and the image is sharpened based on that mask.
The method provided by the embodiment of the present invention acquires a plurality of video images to be processed; for each video image of the plurality, generates a gray-scale map of the video image according to the edge strength of its pixels, wherein the gray level of each pixel in the gray-scale map is the edge strength of the corresponding pixel in the video image; and sharpens the video image based on the gray-scale map. With this method, regions of high edge strength that should be sharpened can be sharpened selectively. Compared with sharpening the whole video image, this avoids over-sharpening and improves the viewing quality of the video.
Fig. 3 is a block diagram of a video processing device provided by an embodiment of the present invention. Referring to Fig. 3, the device comprises:
an acquisition module 301, configured to acquire a plurality of video images to be processed;
a generation module 302, configured to generate, for each video image of the plurality, a gray-scale map of the video image according to the edge strength of its pixels, wherein the gray level of each pixel in the gray-scale map is the edge strength of the corresponding pixel in the video image;
a processing module 303, configured to sharpen the video image based on the gray-scale map.
The device provided by the embodiment of the present invention acquires a plurality of video images to be processed; for each video image of the plurality, generates a gray-scale map of the video image according to the edge strength of its pixels; and sharpens the video image based on the gray-scale map. In this way, regions of high edge strength that should be sharpened can be sharpened selectively. Compared with sharpening the whole video image, this avoids over-sharpening and improves the viewing quality of the video.
Optionally, the device further comprises:
a determination module, configured to apply an operator to determine the edge strength of each pixel in the video image.
Optionally, the processing module is configured to: perform a dilation operation and/or a Gaussian blur operation on the gray-scale map to obtain a first intermediate image; perform an erosion operation on the first intermediate image to obtain a second intermediate image; and sharpen the video image based on the second intermediate image.
Optionally, the processing module is configured to: use the gray-scale map as a mask, and sharpen the image based on the mask.
Optionally, the acquisition module is further configured to: perform text detection on a plurality of initial video images of the video, and take the video images that contain text as the images to be processed.
A person of ordinary skill in the art will understand that all or part of the steps of the above embodiments may be implemented by hardware, or by a program instructing the relevant hardware. The program may be stored in a computer-readable storage medium, and the storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like.
Fig. 4 is a block diagram of a terminal 400 provided by an embodiment of the present invention. For example, the terminal 400 may be a mobile phone, a computer, a digital TV terminal, a messaging terminal, a tablet, a personal digital assistant, or the like.
Referring to Fig. 4, the terminal 400 may comprise one or more of the following components: a processing component 402, a memory 404, a power component 406, a multimedia component 408, an audio component 410, an input/output (I/O) interface 412, a sensor component 414, and a communication component 416.
The processing component 402 generally controls the overall operation of the terminal 400, such as operations associated with display, telephone calls, data communication, camera operation and recording. The processing component 402 may comprise one or more processors 420 to execute instructions so as to complete all or part of the steps of the above method. In addition, the processing component 402 may comprise one or more modules to facilitate interaction between the processing component 402 and the other components; for example, the processing component 402 may comprise a multimedia module to facilitate interaction between the multimedia component 408 and the processing component 402.
The memory 404 is configured to store various types of data to support the operation of the terminal 400. Examples of such data include instructions of any application or method operating on the terminal 400, contact data, phonebook data, messages, pictures, video, and so on. The memory 404 may be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as static random-access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk or an optical disc.
The power component 406 provides power to the various components of the terminal 400. The power component 406 may comprise a power management system, one or more power supplies, and other components associated with generating, managing and distributing power for the terminal 400.
The multimedia component 408 comprises a screen providing an output interface between the terminal 400 and the user. In some embodiments, the screen may comprise a liquid-crystal display (LCD) and a touch panel (TP). If the screen comprises a touch panel, it may be implemented as a touch screen to receive input signals from the user. The touch panel comprises one or more touch sensors to sense touches, swipes and gestures on the panel. The touch sensors may sense not only the boundary of a touch or swipe action but also the duration and pressure associated with it. In some embodiments, the multimedia component 408 comprises a front camera and/or a rear camera. When the terminal 400 is in an operating mode, such as a photographing mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each front or rear camera may be a fixed optical lens system or have focal-length and optical-zoom capability.
The audio component 410 is configured to output and/or input audio signals. For example, the audio component 410 comprises a microphone (MIC) configured to receive external audio signals when the terminal 400 is in an operating mode, such as a call mode, a recording mode or a speech-recognition mode. The received audio signals may be further stored in the memory 404 or sent via the communication component 416. In some embodiments, the audio component 410 also comprises a speaker for outputting audio signals.
The I/O interface 412 provides an interface between the processing component 402 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, and so on. These buttons may include, but are not limited to: a home button, volume buttons, a start button and a lock button.
The sensor component 414 comprises one or more sensors for providing status assessments of various aspects of the terminal 400. For example, the sensor component 414 may detect the open/closed state of the terminal 400 and the relative positioning of components, such as the display and keypad of the terminal 400; the sensor component 414 may also detect a change in the position of the terminal 400 or of one of its components, the presence or absence of user contact with the terminal 400, the orientation or acceleration/deceleration of the terminal 400, and changes in its temperature. The sensor component 414 may comprise a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 414 may also comprise an optical sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 414 may also comprise an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor or a temperature sensor.
The communication component 416 is configured to facilitate wired or wireless communication between the terminal 400 and other terminals. The terminal 400 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In one exemplary embodiment, the communication component 416 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 416 also comprises a near-field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio-frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
In an exemplary embodiment, the terminal 400 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors or other electronic components, for performing the video processing method shown in Fig. 1 or Fig. 2 above.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions is also provided, such as the memory 404 comprising instructions, which may be executed by the processor 420 of the terminal 400 to perform the above method. For example, the non-transitory computer-readable storage medium may be a ROM, a random-access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
In an exemplary embodiment, a non-transitory computer-readable storage medium is also provided; when the instructions in the storage medium are executed by the processor of a terminal, the terminal is enabled to perform the above video processing method.
Fig. 5 is a block diagram of a server 500 provided by an embodiment of the present invention. Referring to Fig. 5, the server 500 comprises a processing component 522, which further comprises one or more processors, and memory resources represented by a memory 532, for storing instructions executable by the processing component 522, such as applications. The applications stored in the memory 532 may comprise one or more modules, each corresponding to a set of instructions. The processing component 522 is configured to execute the instructions so as to perform the video processing method of Fig. 1 or Fig. 2 above.
The server 500 may also include a power component 525 configured to perform power management of the server 500, a wired or wireless network interface 550 configured to connect the server 500 to a network, and an input/output (I/O) interface 558. The server 500 may operate based on an operating system stored in the memory 532, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
Other embodiments of the present disclosure will readily occur to those skilled in the art upon consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the present disclosure that follow its general principles and include common knowledge or customary technical means in the art not disclosed herein. The specification and embodiments are to be regarded as exemplary only, with the true scope and spirit of the present disclosure being indicated by the following claims.
It should be understood that the present disclosure is not limited to the precise structures described above and illustrated in the accompanying drawings, and that various modifications and changes may be made without departing from its scope. The scope of the present disclosure is limited only by the appended claims.
Claims (10)
1. A video processing method, characterized in that the method comprises:
obtaining a plurality of video images to be processed;
for each video image in the plurality of video images, generating a gray-scale map of the video image according to the pixel edge strengths of the video image, wherein the gray value of each pixel in the gray-scale map is the edge strength of the corresponding pixel in the video image; and
performing sharpening processing on the video image based on the gray-scale map.
2. The method according to claim 1, characterized in that the method further comprises:
determining the edge strength of each pixel in the video image by applying an edge-detection operator.
3. The method according to claim 1, characterized in that performing sharpening processing on the video image based on the gray-scale map comprises:
performing a dilation operation and/or a Gaussian blur operation on the gray-scale map to obtain a first intermediate image;
performing an erosion operation on the first intermediate image to obtain a second intermediate image; and
performing sharpening processing on the video image based on the second intermediate image.
4. The method according to claim 1, characterized in that performing sharpening processing on the video image based on the gray-scale map comprises:
using the gray-scale map as a mask, and performing sharpening processing on the video image based on the mask.
5. The method according to claim 1, characterized in that obtaining a plurality of video images to be processed comprises:
performing text detection on a plurality of initial video images of a video, and taking the video images that contain text as the images to be processed.
6. A video processing apparatus, characterized in that the apparatus comprises:
an acquisition module, configured to obtain a plurality of video images to be processed;
a generation module, configured to, for each video image in the plurality of video images, generate a gray-scale map of the video image according to the pixel edge strengths of the video image, wherein the gray value of each pixel in the gray-scale map is the edge strength of the corresponding pixel in the video image; and
a processing module, configured to perform sharpening processing on the video image based on the gray-scale map.
7. The apparatus according to claim 6, characterized in that the apparatus further comprises:
a determination module, configured to determine the edge strength of each pixel in the video image by applying an edge-detection operator.
8. The apparatus according to claim 6, characterized in that the processing module is configured to: perform a dilation operation and/or a Gaussian blur operation on the gray-scale map to obtain a first intermediate image; perform an erosion operation on the first intermediate image to obtain a second intermediate image; and perform sharpening processing on the video image based on the second intermediate image.
9. The apparatus according to claim 6, characterized in that the processing module is configured to: use the gray-scale map as a mask, and perform sharpening processing on the video image based on the mask.
10. The apparatus according to claim 6, characterized in that the acquisition module is further configured to: perform text detection on a plurality of initial video images of a video, and take the video images that contain text as the images to be processed.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510536868.8A CN105120302A (en) | 2015-08-27 | 2015-08-27 | Video processing method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510536868.8A CN105120302A (en) | 2015-08-27 | 2015-08-27 | Video processing method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN105120302A true CN105120302A (en) | 2015-12-02 |
Family
ID=54668148
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510536868.8A Pending CN105120302A (en) | 2015-08-27 | 2015-08-27 | Video processing method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105120302A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080112640A1 (en) * | 2006-11-09 | 2008-05-15 | Sang Wook Park | Apparatus and method for sharpening blurred enlarged image |
US7643698B2 (en) * | 2005-12-22 | 2010-01-05 | Apple Inc. | Image sharpening using diffusion |
CN102547147A (en) * | 2011-12-28 | 2012-07-04 | 上海聚力传媒技术有限公司 | Method for realizing enhancement processing for subtitle texts in video images and device |
CN103617600A (en) * | 2013-11-25 | 2014-03-05 | 厦门美图网科技有限公司 | Method for automatically sharpening image based on edge detection |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105791636A (en) * | 2016-04-07 | 2016-07-20 | 潍坊科技学院 | Video processing system |
CN108053371A (en) * | 2017-11-30 | 2018-05-18 | 努比亚技术有限公司 | A kind of image processing method, terminal and computer readable storage medium |
CN108053371B (en) * | 2017-11-30 | 2022-04-19 | 努比亚技术有限公司 | Image processing method, terminal and computer readable storage medium |
Legal Events
Date | Code | Title | Description
---|---|---|---
| C06 | Publication | |
| PB01 | Publication | |
| C10 | Entry into substantive examination | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20151202 |