CN109495767A - Method and apparatus for output information - Google Patents
- Publication number
- CN109495767A (application CN201811445250.0A)
- Authority
- CN
- China
- Prior art keywords
- video frame
- value
- pixel
- loudness
- loudness value
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/233—Processing of audio elementary streams
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/439—Processing of audio elementary streams
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/472—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
- H04N21/47205—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
Abstract
Embodiments of the present application disclose a method and apparatus for outputting information. One specific embodiment of the method includes: extracting a loudness value from audio data in response to receiving the audio data; obtaining a video frame to be processed; adjusting the color values of the pixels in the video frame according to the loudness value; and outputting the adjusted video frame. This embodiment realizes video special-effect rendering driven by audio.
Description
Technical field
Embodiments of the present application relate to the field of computer technology, and in particular to a method and apparatus for outputting information.
Background technique
When creating video content on a mobile terminal, adding special effects to video is a primary demand of most users. Current application markets offer many applications for adding special effects, but these applications are limited to effects applied in the dimension of the video stream alone. A good video work, however, should not only convey information to its audience visually; it should also guide the audience aurally. Audio is an important channel through which video content creators communicate with their audience: the large amount of information carried in the audio passes through the listener's auditory system to the brain, where it combines with the visual information to leave an impression on the audience.
It is therefore desirable to have a way to present audio information in visual form, enhancing the audience's perception of the audio information and letting their vision and hearing resonate together.
Summary of the invention
Embodiments of the present application propose a method and apparatus for outputting information.
In a first aspect, an embodiment of the present application provides a method for outputting information, comprising: in response to receiving audio data, extracting a loudness value from the audio data; obtaining a video frame to be processed; adjusting the color values of the pixels in the video frame according to the loudness value; and outputting the adjusted video frame.
In some embodiments, adjusting the color values of the pixels in the video frame according to the loudness value comprises: determining a rendering distance according to the loudness value, wherein the rendering distance is proportional to the loudness value; and, for each pixel in the video frame, adjusting the color value of that pixel according to the color value of a target pixel located the rendering distance away from it in a predetermined direction.
In some embodiments, adjusting the color values of the pixels in the video frame according to the loudness value comprises: determining a color-cast value according to the loudness value, wherein the color-cast value is proportional to the loudness value; and, for each pixel in the video frame, adjusting the color value of that pixel according to the color-cast value.
In some embodiments, adjusting the color values of the pixels in the video frame according to the loudness value comprises: determining an image offset according to the loudness value, wherein the image offset is proportional to the loudness value; shifting the video frame by the image offset to generate a shifted video frame; and superimposing the shifted video frame on the video frame before the shift.
In some embodiments, the method further comprises: determining a superposition proportion according to the loudness value, wherein the superposition proportion is proportional to the loudness value; obtaining a first parameter of a preset filter template; adjusting the first parameter of the filter template according to the superposition proportion; and adjusting the color values of the pixels in the video frame using the adjusted filter template.
In some embodiments, the method further comprises: extracting a frequency value from the audio data; and adjusting the video frame according to the frequency value.
In some embodiments, adjusting the video frame according to the frequency value comprises: determining a target brightness according to the frequency value, wherein the target brightness is proportional to the frequency value; and, for each pixel in the video frame, adjusting the brightness of that pixel according to the target brightness.
In some embodiments, adjusting the video frame according to the frequency value comprises: determining a filter intensity according to the frequency value, wherein the filter intensity is proportional to the frequency value; obtaining a second parameter of a preset filter template; adjusting the second parameter of the filter template according to the filter intensity; and adjusting the color values of the pixels in the video frame using the adjusted filter template.
In some embodiments, adjusting the video frame according to the frequency value comprises: changing the color values of all or part of the video frame according to the frequency value.
In some embodiments, adjusting the video frame according to the frequency value comprises: changing the color values of special-effect elements added to the video frame according to the frequency value.
In a second aspect, an embodiment of the present application provides an apparatus for outputting information, comprising: an extraction unit configured to extract a loudness value from audio data in response to receiving the audio data; an acquisition unit configured to obtain a video frame to be processed; an adjustment unit configured to adjust the color values of the pixels in the video frame according to the loudness value; and an output unit configured to output the adjusted video frame.
In some embodiments, the adjustment unit is further configured to: determine a rendering distance according to the loudness value, wherein the rendering distance is proportional to the loudness value; and, for each pixel in the video frame, adjust the color value of that pixel according to the color value of a target pixel located the rendering distance away from it in a predetermined direction.
In some embodiments, the adjustment unit is further configured to: determine a color-cast value according to the loudness value, wherein the color-cast value is proportional to the loudness value; and, for each pixel in the video frame, adjust the color value of that pixel according to the color-cast value.
In some embodiments, the adjustment unit is further configured to: determine an image offset according to the loudness value, wherein the image offset is proportional to the loudness value; shift the video frame by the image offset to generate a shifted video frame; and superimpose the shifted video frame on the video frame before the shift.
In some embodiments, the adjustment unit is further configured to: determine a superposition proportion according to the loudness value, wherein the superposition proportion is proportional to the loudness value; obtain a first parameter of a preset filter template; adjust the first parameter of the filter template according to the superposition proportion; and adjust the color values of the pixels in the video frame using the adjusted filter template.
In some embodiments, the extraction unit is further configured to extract a frequency value from the audio data, and the adjustment unit is further configured to adjust the video frame according to the frequency value.
In some embodiments, the adjustment unit is further configured to: determine a target brightness according to the frequency value, wherein the target brightness is proportional to the frequency value; and, for each pixel in the video frame, adjust the brightness of that pixel according to the target brightness.
In some embodiments, the adjustment unit is further configured to: determine a filter intensity according to the frequency value, wherein the filter intensity is proportional to the frequency value; obtain a second parameter of a preset filter template; adjust the second parameter of the filter template according to the filter intensity; and adjust the color values of the pixels in the video frame using the adjusted filter template.
In some embodiments, the adjustment unit is further configured to: change the color values of all or part of the video frame according to the frequency value.
In some embodiments, the adjustment unit is further configured to: change the color values of special-effect elements added to the video frame according to the frequency value.
In a third aspect, an embodiment of the present application provides an electronic device, comprising: one or more processors; and a storage device on which one or more programs are stored, which, when executed by the one or more processors, cause the one or more processors to implement any of the methods of the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable medium on which a computer program is stored, wherein the program, when executed by a processor, implements any of the methods of the first aspect.
The method and apparatus for outputting information provided by the embodiments of the present application extract the loudness value and/or frequency value from audio data and use them to adjust the color values and/or brightness of video frames, thereby realizing video special effects that change with the audio.
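The pipeline summarized above — extract loudness (and optionally a dominant frequency) from audio, then adjust the color values of a video frame accordingly — can be sketched as follows. This is an illustrative sketch only, not the patent's implementation: the function names, the gain of 50, and the use of NumPy are assumptions.

```python
import numpy as np

def extract_loudness_and_frequency(samples, sample_rate):
    """Loudness (peak amplitude) and dominant frequency of one audio window."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    loudness = float(np.max(np.abs(samples)))
    frequency = float(freqs[np.argmax(spectrum)])
    return loudness, frequency

def adjust_frame(frame, loudness):
    """Shift every color value by an amount proportional to loudness."""
    cast = int(round(loudness * 50))            # gain of 50 is an assumption
    shifted = frame.astype(np.int16) + cast
    return np.clip(shifted, 0, 255).astype(np.uint8)

# one second of a 440 Hz tone, and a uniform gray frame
t = np.linspace(0.0, 1.0, 1024, endpoint=False)
audio = 0.5 * np.sin(2 * np.pi * 440 * t)
frame = np.full((4, 4, 3), 100, dtype=np.uint8)

loud, freq = extract_loudness_and_frequency(audio, sample_rate=1024)
out = adjust_frame(frame, loud)
```

The clamping in `adjust_frame` mirrors the claims: adjusted color values never leave the valid [0, 255] range.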
Brief description of the drawings
Other features, objects, and advantages of the application will become more apparent by reading the following detailed description of non-restrictive embodiments with reference to the accompanying drawings:
Fig. 1 is an exemplary system architecture diagram to which an embodiment of the application may be applied;
Fig. 2 is a flowchart of one embodiment of the method for outputting information according to the application;
Fig. 3 is a schematic diagram of the pixel-color-value splitting scheme of the method for outputting information according to the application;
Fig. 4 is a schematic diagram of an application scenario of the method for outputting information according to the application;
Fig. 5 is a flowchart of another embodiment of the method for outputting information according to the application;
Figs. 6a-6d are video effect renderings of the method for outputting information according to the application;
Fig. 7 is a structural schematic diagram of one embodiment of the apparatus for outputting information according to the application;
Fig. 8 is a structural schematic diagram of a computer system adapted to implement an electronic device of the embodiments of the application.
Detailed description of embodiments
The application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the related invention, not to limit it. It should also be noted that, for convenience of description, only the parts relevant to the related invention are shown in the drawings.
It should be noted that, in the absence of conflict, the embodiments of the application and the features in the embodiments may be combined with each other. The application is described in detail below with reference to the drawings and in conjunction with the embodiments.
Fig. 1 shows an exemplary system architecture 100 in which embodiments of the method for outputting information or the apparatus for outputting information of the application may be applied.
As shown in Fig. 1, the system architecture 100 may include a terminal device 101, a camera 102, a microphone 103, and a server 104. The camera 102 and microphone 103 may be built into the terminal device 101 or may be external devices. A network provides the medium for communication links between the terminal device 101, camera 102, microphone 103, and server 104, and may include various connection types, such as wired or wireless communication links or fiber-optic cables.
A user may use the terminal device 101 to interact with the server 104 over the network, for example to receive or send messages. Various communication client applications may be installed on the terminal device 101, such as video editing applications, web browser applications, shopping applications, search applications, instant messaging tools, email clients, and social platform software.
The terminal device 101 may be hardware or software. When the terminal device 101 is hardware, it may be any of various electronic devices with a display screen that support video editing, including but not limited to smartphones, tablet computers, e-book readers, MP3 players (Moving Picture Experts Group Audio Layer III), MP4 players (Moving Picture Experts Group Audio Layer IV), laptop computers, desktop computers, and so on. When the terminal device 101 is software, it may be installed in the electronic devices listed above, and may be implemented as multiple pieces of software or software modules (for example, to provide distributed services) or as a single piece of software or software module. No specific limitation is made here.
The server 104 may be a server providing various services, for example a background video-rendering server that supports the video displayed on the terminal device 101. The background video-rendering server may analyze and otherwise process data such as received video-rendering requests, and feed the processing results (such as the rendered video) back to the terminal device.
The server 104 may also receive the video captured by the camera 102 and the sound collected by the microphone 103 directly, without interacting with the terminal device 101.
It should be noted that the server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster composed of multiple servers or as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (for example, to provide distributed services) or as a single piece of software or software module. No specific limitation is made here.
It should be noted that the method for outputting information provided by the embodiments of the present application may be executed by the terminal device 101 or by the server 104. Correspondingly, the apparatus for outputting information may be provided in the terminal device 101 or in the server 104. No specific limitation is made here.
It should be understood that the numbers of terminal devices, networks, and servers in Fig. 1 are merely illustrative. There may be any number of terminal devices, networks, and servers as required by the implementation.
With continued reference to Fig. 2, a flow 200 of one embodiment of the method for outputting information according to the application is shown. The method for outputting information comprises the following steps:
Step 201: in response to receiving audio data, extract a loudness value from the audio data.
In this embodiment, the executing body of the method for outputting information (such as the terminal device shown in Fig. 1) may receive audio data, also called audio stream information, through a built-in or external microphone. The time span of the audio data may be the duration of one video frame. Within one frame's worth of audio data the loudness may vary, so multiple loudness values can be extracted, from which a maximum loudness value and, further, an average loudness value can be determined. The audio data here is audio data after PCM (Pulse Code Modulation). Sound is the auditory system's response to the vibration of a medium, and any sound can be decomposed into a superposition of sine waves of different frequencies and varying strengths (Fourier transform). Audio data therefore mainly contains two kinds of information: frequency and amplitude. The amplitude largely represents the loudness information of the audio, while the frequency represents its tone information. Specifically:
(1) Loudness reflects the strength of the sound. In general, the stronger the sound, the greater its impact on the video viewer; the weaker the loudness, the smaller the impact.
(2) Tone reflects how high the pitch of the audio is. The higher the tone, the brighter it sounds; the lower the tone, the more muffled it sounds. The reason we can perceive the pitch of a melody is that the auditory system performs a Fourier transform on audio information, allowing us to distinguish music of different tones.
Ordinarily, a user can only do simple video processing with the loudness information, while the tone information goes entirely unused. Based on these considerations, we extract the loudness and tone information of the real-time audio stream and render it into the live video stream in an appropriate way.
To parse the audio information in real time, we chose a fast Fourier transform algorithm to process the audio stream, handing the work normally done by the human auditory system to the computer: it obtains the tone of the audio stream at the current moment while also monitoring the loudness information of the audio. For drawing the special effects, we define two concepts:
(1) Effect intensity: determined by the loudness of the audio; the larger the loudness value, the stronger the intensity.
(2) Atmosphere: determined by the tone of the audio; the higher the tone, the brighter and more positive the atmosphere; the lower the tone, the more muffled and subdued the atmosphere.
For these two concepts, a high-performance GPU (Graphics Processing Unit) processing algorithm is used on the video-rendering side. For effect intensity, different video effects are expressed through three means: effect amplitude, effect offset, and effect range. For atmosphere, different effects are expressed through three means: video element tone, effect element tone, and picture brightness.
For example, the video stream information can be obtained directly using the API (Application Programming Interface) provided by the system camera on a mobile terminal. On iOS, the video stream is rendered with Metal (a low-level rendering API that provides the lowest layer needed by software, ensuring it can run on different graphics chips), and the audio stream is processed with an FFT accelerated by the Accelerate framework (for large-scale mathematical and image computation). On Android, the video stream is rendered with OpenGL (Open Graphics Library), and the audio stream uses the Ne10 library for the FFT operation.
On the audio side, whenever audio stream information is obtained it is immediately stored in a cache. When the cached data reaches the window size of one FFT (Fast Fourier Transform), the FFT operation is performed and the amplitude (i.e. the loudness value) of that segment of audio is obtained. The loudness value, frequency value, and corresponding timestamp information are then saved to memory, and the audio cache is emptied.
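As an illustrative sketch of this caching scheme (the window size, class name, and use of NumPy are assumptions, not from the patent):

```python
import numpy as np

FFT_WINDOW = 1024  # assumed FFT window size

class AudioAnalyzer:
    """Cache incoming audio; once a full FFT window has accumulated,
    record (timestamp, loudness, frequency) and empty the cache."""

    def __init__(self, sample_rate):
        self.sample_rate = sample_rate
        self.cache = []
        self.results = []

    def feed(self, chunk, timestamp):
        self.cache.extend(chunk)
        if len(self.cache) >= FFT_WINDOW:
            window = np.asarray(self.cache[:FFT_WINDOW])
            spectrum = np.abs(np.fft.rfft(window))
            freqs = np.fft.rfftfreq(FFT_WINDOW, d=1.0 / self.sample_rate)
            loudness = float(np.max(np.abs(window)))   # amplitude of the segment
            frequency = float(freqs[np.argmax(spectrum)])
            self.results.append((timestamp, loudness, frequency))
            self.cache.clear()                          # empty the audio cache

analyzer = AudioAnalyzer(sample_rate=1024)
t = np.linspace(0.0, 1.0, 1024, endpoint=False)
tone = 0.8 * np.sin(2 * np.pi * 200 * t)
analyzer.feed(tone[:512].tolist(), timestamp=0.0)   # not yet a full window
analyzer.feed(tone[512:].tolist(), timestamp=0.5)   # window full: FFT runs
```

The second `feed` call fills the window, so exactly one (timestamp, loudness, frequency) record is produced and the cache is left empty, matching the store-then-clear behavior described above.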
Step 202: obtain a video frame to be processed.
In this embodiment, the video frame to be processed is the video frame to be rendered. It may be a frame of a video stream being recorded, or a frame of an already recorded video stream. A frame's worth of sound can be collected and immediately rendered into the video frame, i.e. the time difference between the audio data and the video frame is one frame. Video rendering can be driven by the video stream: whenever a video frame is obtained, the most recent audio analysis data is requested from memory and the temporal information of the video frame and the audio is compared. The audio data should not be earlier than the video frame by too much; an appropriate threshold can be used here to guarantee synchronization in the visual effect.
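The timestamp comparison can be sketched as below; the helper name and the threshold value are illustrative assumptions:

```python
def latest_matching_audio(audio_records, video_ts, max_lag=0.1):
    """Return the most recent (timestamp, loudness, frequency) record that is
    not later than the video frame and not older than it by more than max_lag;
    return None when every record is too stale to keep the effect in sync."""
    best = None
    for record in audio_records:
        ts = record[0]
        if ts <= video_ts and video_ts - ts <= max_lag:
            if best is None or ts > best[0]:
                best = record
    return best

records = [(0.00, 0.2, 440.0), (0.04, 0.6, 880.0), (0.08, 0.9, 660.0)]
match = latest_matching_audio(records, video_ts=0.10)   # 0.08 is freshest
stale = latest_matching_audio(records, video_ts=0.50)   # everything too old
```

Returning `None` for stale data lets the renderer fall back to no effect rather than apply loudness information from the distant past.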
Step 203: adjust the color values of the pixels in the video frame according to the loudness value.
In this embodiment, based on the effect-intensity property, the audio loudness features can be rendered into the video picture automatically. The rendering modes include but are not limited to the following product effects:
(1) Shifting pixels to make the whole video shake.
An image offset can be determined according to the loudness value, wherein the image offset is proportional to the loudness value. The video frame is shifted by the image offset to generate a shifted video frame, and the shifted video frame is superimposed on the video frame before the shift. The loudness value here may be the maximum loudness value in a segment of audio data, or the average loudness value of that segment.
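A minimal sketch of this whole-frame shake, assuming NumPy and an illustrative loudness-to-pixels gain (the wrap-around shift and equal-weight superposition are simplifications):

```python
import numpy as np

def shake_frame(frame, loudness, gain=10):
    """Shift the frame horizontally by an offset proportional to loudness,
    then superimpose the shifted copy on the original (equal weights)."""
    offset = int(loudness * gain)                 # offset grows with loudness
    shifted = np.roll(frame, offset, axis=1)      # simple wrap-around shift
    blended = (frame.astype(np.uint16) + shifted.astype(np.uint16)) // 2
    return blended.astype(np.uint8)

frame = np.zeros((2, 4, 3), dtype=np.uint8)
frame[:, 0, :] = 200                              # one bright column
out = shake_frame(frame, loudness=0.2)            # offset = 2 pixels
```

After superposition, both the original and the shifted bright column appear at half intensity, producing the double-image shake.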
(2) Splitting pixel color values to make the video "jitter".
A rendering distance can be determined according to the loudness value, wherein the rendering distance is proportional to the loudness value. For each pixel in the video frame, the color value of the pixel is adjusted according to the color value of the target pixel located the rendering distance away from it in a predetermined direction. The loudness value here may be the maximum loudness value in the audio data, or the average loudness value. The predetermined direction may be set to any three of the four directions up, down, left, and right, with each direction taking one of the three RGB values. As shown in Fig. 3, the RGB color value of a single rendered pixel = (R value of the pixel to its left in the original image, G value of the pixel above it in the original image, B value of the pixel to its right in the original image).
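The Fig. 3 scheme — R from the left neighbor, G from the upper neighbor, B from the right neighbor — can be sketched with NumPy as follows; wrap-around behavior at the frame border is a simplifying assumption:

```python
import numpy as np

def channel_split(frame, distance):
    """Per the Fig. 3 scheme: take R from the pixel `distance` to the left,
    G from the pixel above, and B from the pixel to the right. `distance`
    would be the rendering distance derived from the loudness value."""
    r = np.roll(frame[:, :, 0], distance, axis=1)    # r[y, x] = R of (y, x - d)
    g = np.roll(frame[:, :, 1], distance, axis=0)    # g[y, x] = G of (y - d, x)
    b = np.roll(frame[:, :, 2], -distance, axis=1)   # b[y, x] = B of (y, x + d)
    return np.stack([r, g, b], axis=-1)

frame = np.arange(3 * 3 * 3, dtype=np.uint8).reshape(3, 3, 3)
out = channel_split(frame, distance=1)
```

Because the three channels are sampled from different neighbors, edges in the frame fringe into separated color ghosts, which reads as jitter.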
(3) Changing the color values of pixels in certain areas to give the video a color cast according to the audio loudness feature.
A color-cast value can be determined according to the loudness value, wherein the color-cast value is proportional to the loudness value. For each pixel in the video frame, the color value of the pixel is adjusted according to the color-cast value. The loudness value here may be the maximum loudness value in the audio data, or the average loudness value. When the loudness value is above a predetermined first threshold, the color-cast value can be uniformly added to the color values of the pixels in the video frame, with the final color value not exceeding 255. When the loudness value is below a predetermined second threshold, the color-cast value can be uniformly subtracted from the color values of the pixels in the video frame, with the final color value not falling below 0.
(4) Using a pre-made filter template and changing the superposition proportion in real time to change the filter intensity.
A filter template is a way of using a color lookup table, with the loudness value as a fusion proportion parameter. A filter template may also use the frequency value as a fusion proportion parameter. To distinguish them, the parameter influenced by the loudness value may be called the first parameter, and the parameter influenced by the frequency value the second parameter. In actual use, the construction of the color lookup table (filter) may differ; for example, the "hue" attribute of the filter might be modified to express the loudness value, while the "saturation" attribute is modified to express the frequency value.
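One way to sketch such a lookup-table filter with a loudness-driven superposition proportion (the inversion table is an arbitrary stand-in for a real filter, and the blending rule is an assumption):

```python
import numpy as np

def apply_lut_blend(frame, lut, proportion):
    """Map each color value through a 256-entry color lookup table (the
    filter template), then blend with the original frame; `proportion`
    is the superposition proportion derived from the loudness value."""
    filtered = lut[frame]                                   # per-value lookup
    mixed = (1.0 - proportion) * frame + proportion * filtered
    return np.clip(mixed, 0, 255).astype(np.uint8)

lut = np.arange(255, -1, -1, dtype=np.uint8)   # stand-in filter: inversion
frame = np.full((2, 2, 3), 100, dtype=np.uint8)
out = apply_lut_blend(frame, lut, proportion=0.5)
```

At proportion 0 the frame is untouched; as the loudness-driven proportion rises, the filter's look fades in without rebuilding the lookup table each frame.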
Any combination of the above four ways of adjusting color values according to the loudness value may be used.
Step 204: output the adjusted video frame.
In this embodiment, the video frame adjusted in the above steps can be output to the display interface. Video frames can be modified in real time to realize the rendering effect across the entire video.
With continued reference to Fig. 4, a schematic diagram of an application scenario of the method for outputting information according to this embodiment is shown. In the application scenario of Fig. 4, the terminal device converts the audio frames acquired by the microphone into PCM information. The PCM information then undergoes a Fourier transform in an audio decoder, yielding loudness information and tone information (frequency). The video renderer adjusts the video frames collected by the camera according to the loudness and tone information and renders them in real time. Finally, the video frames are output one by one; the continuously output frames constitute the video stream.
The method provided by the above embodiment of the present application performs special effect processing on video frames based on audio data, thereby presenting the audio information in video form, enhancing the audience's perception of the audio information, and producing a resonance between the audience's vision and hearing.
With further reference to Fig. 5, it illustrates a process 500 of another embodiment of the method for outputting information. The process 500 of the method for outputting information includes the following steps:
Step 501, in response to receiving audio data, a loudness value is extracted from the audio data.
Step 502, a video frame to be processed is acquired.
Step 503, the color value of a pixel in the video frame is adjusted according to the loudness value.
Steps 501-503 are essentially identical to steps 201-203 and are therefore not described again.
Step 504, a frequency value is extracted from the audio data.
In the present embodiment, frequency values may be extracted from the audio data by FFT. A segment of audio data may contain multiple frequency values. When subsequently used to adjust the video frame, the maximum frequency value in the segment of audio data, i.e. the peak frequency, may be used. Alternatively, the average of the frequency values in the segment of audio data may be used.
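Both choices can be sketched with NumPy's real FFT; taking the magnitude-weighted mean is one reasonable reading of "average of the frequency values", and is an assumption here.

```python
import numpy as np

def extract_frequency(pcm, sample_rate, mode="peak"):
    """Extract a single frequency value from a segment of audio data via FFT.

    mode="peak" returns the strongest frequency (the peak frequency);
    mode="mean" returns the magnitude-weighted average of the frequencies.
    """
    spectrum = np.abs(np.fft.rfft(pcm))[1:]               # drop the DC bin
    freqs = np.fft.rfftfreq(len(pcm), d=1.0 / sample_rate)[1:]
    if mode == "peak":
        return float(freqs[np.argmax(spectrum)])
    return float(np.average(freqs, weights=spectrum))     # mean frequency

# Two-tone test segment: a strong 1000 Hz component and a weaker 250 Hz one.
sr = 8000
t = np.arange(1024) / sr
pcm = np.sin(2 * np.pi * 1000.0 * t) + 0.3 * np.sin(2 * np.pi * 250.0 * t)
peak = extract_frequency(pcm, sr, mode="peak")
```

The peak reading latches onto the dominant tone, while the mean sits between the two components, weighted toward the stronger one.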
Step 505, the video frame is adjusted according to the frequency value.
In the present embodiment, based on ambience attributes, the function of automatically rendering the tonal features of the audio onto the video picture may be implemented. The rendering modes include, but are not limited to, the following product effects:
(1) The atmosphere corresponding to melodies of different tones is built by changing the color values of all or part of the video. At the code level, the color values are changed directly, either in a fixed manner or with slight randomness; this is distinct from directly applying a filter.
(2) The atmosphere is built by changing the color values of special effect elements added to the video picture. When other video special effects are subsequently added, the colors of the added effects may be changed.
(3) The atmosphere is built by changing the overall brightness of the picture. A target brightness is determined according to the frequency value, where the target brightness is directly proportional to the frequency value; for each pixel in the video frame, the brightness of the pixel is adjusted according to the target brightness.
(4) The atmosphere is built by using a filter template made in advance and changing its fusion configuration parameter in real time. A filter intensity is determined according to the frequency value, where the filter intensity is directly proportional to the frequency value. The second parameter of a preset filter template is obtained and adjusted according to the filter intensity, and the adjusted filter template is used to adjust the color values of the pixels in the video frame. This method is similar to adjusting the first parameter of the filter template according to the loudness value; only the adjusted parameter differs. For example, the saturation may be adjusted according to the frequency value.
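For instance, treating saturation as the second parameter, a per-pixel sketch using Python's colorsys; the `freq_max` normalization and the 0.5-1.5 intensity range are assumptions.

```python
import colorsys

def adjust_saturation(rgb, freq, freq_max=4000.0):
    """Adjust a pixel's saturation (taken here as the 'second parameter').

    Filter intensity is proportional to the frequency value; saturation is
    scaled by it. rgb: (r, g, b) channel values in 0-255.
    """
    intensity = min(freq, freq_max) / freq_max            # filter intensity, in [0, 1]
    h, s, v = colorsys.rgb_to_hsv(*(c / 255.0 for c in rgb))
    s = min(1.0, s * (0.5 + intensity))                   # scale saturation, capped at 1
    return tuple(round(c * 255) for c in colorsys.hsv_to_rgb(h, s, v))

muted = adjust_saturation((200, 80, 80), freq=0.0)       # halves the saturation
vivid = adjust_saturation((200, 80, 80), freq=4000.0)    # 1.5x saturation
```

In a real pipeline this mapping would be baked into the LUT rather than computed per pixel in Python.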
The above modes may be used alone or in combination, producing audio-visual works that are closely integrated with the melody of the background music.
Fig. 6 a-6d illustrates the product effect example that the application may be implemented.Fig. 6 a is that the frequency of audio is high, and loudness is strong
When rendering result.Fig. 6 b is that the frequency of audio is high, rendering result when loudness is weak.Fig. 6 c is that the frequency of audio is low, and loudness is strong
When rendering result.Fig. 6 d is that the frequency of audio is low, rendering result when loudness is weak.In the example, fractionation single pixel is used
The mode of point color value shakes video, and vibration amplitude is controlled by the loudness value of audio.Changed using the mode dynamic for adjusting brightness value
Become the whole shading value of video, the amplitude of change is controlled by the frequency height of audio.
It can be seen that the video frame rendered using high-frequency audio fragment, whole partially bright.Use strong loudness audio piece
The video frame that section renders, color value degrees of offset is big, can visually experience strong vibration.
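The channel-splitting shake in this example can be sketched as follows, shifting the red and blue channels in opposite directions by a loudness-proportional number of pixels (the scale factor `k` and the wrap-around shift are assumptions).

```python
import numpy as np

def channel_split_shake(frame, loudness, k=0.2):
    """Shake the picture by splitting each pixel's color channels apart.

    The red channel is shifted right and the blue channel left; the shift,
    i.e. the vibration amplitude, is proportional to the loudness value.
    """
    shift = int(k * loudness)                              # amplitude from loudness
    out = frame.copy()
    out[..., 0] = np.roll(frame[..., 0], shift, axis=1)    # red shifted right
    out[..., 2] = np.roll(frame[..., 2], -shift, axis=1)   # blue shifted left
    return out                                             # green stays in place

frame = np.zeros((4, 8, 3), dtype=np.uint8)
frame[:, 4, :] = 255                                       # one white column
shaken = channel_split_shake(frame, loudness=10.0)         # shift of 2 pixels
```

At zero loudness the shift is 0 and the frame passes through unchanged, so the effect naturally fades out in quiet passages.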
Step 506, the adjusted video frame is output.
Step 506 is essentially identical to step 204 and is therefore not described again.
As can be seen from Fig. 5, compared with the embodiment corresponding to Fig. 2, the process 500 of the method for outputting information in the present embodiment embodies the step of rendering the video frame according to the frequency value. The solution described in the present embodiment can thus introduce more rendering methods, realizing more comprehensive video rendering, greatly lowering the threshold for ordinary users to produce audio-based video special effects, and allowing users to produce, at low or even zero cost, audio-visual works with a sense of rhythm and atmosphere. Meanwhile, a high-performance processing mode is used on the mobile terminal platform, realizing the functions of parsing audio in real time and previewing the recorded video.
With further reference to Fig. 7, as an implementation of the methods shown in the above figures, the present application provides an embodiment of a device for outputting information. The device embodiment corresponds to the method embodiment shown in Fig. 2, and the device may specifically be applied in various electronic devices.
As shown in Fig. 7, the device 700 for outputting information of the present embodiment includes: an extraction unit 701, an acquiring unit 702, an adjustment unit 703 and an output unit 704. The extraction unit 701 is configured to, in response to receiving audio data, extract a loudness value from the audio data. The acquiring unit 702 is configured to acquire a video frame to be processed. The adjustment unit 703 is configured to adjust the color values of the pixels in the video frame according to the loudness value. The output unit 704 is configured to output the adjusted video frame.
In the present embodiment, for the specific processing of the extraction unit 701, the acquiring unit 702, the adjustment unit 703 and the output unit 704 of the device 700 for outputting information, reference may be made to steps 201, 202, 203 and 204 in the embodiment corresponding to Fig. 2.
In some optional implementations of the present embodiment, the adjustment unit 703 is further configured to: determine a rendering distance according to the loudness value, where the rendering distance is directly proportional to the loudness value; and, for each pixel in the video frame, adjust the color value of the pixel according to the color value of a target pixel located at the rendering distance from the pixel in a predetermined direction.
In some optional implementations of the present embodiment, the adjustment unit 703 is further configured to: determine a hue error value according to the loudness value, where the hue error value is directly proportional to the loudness value; and, for each pixel in the video frame, adjust the color value of the pixel according to the hue error value.
In some optional implementations of the present embodiment, the adjustment unit 703 is further configured to: determine an image offset according to the loudness value, where the image offset is directly proportional to the loudness value; offset the video frame by the image offset to generate an offset video frame; and superimpose the offset video frame on the video frame before the offset.
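A sketch of this offset-and-superimpose mode, assuming a horizontal wrap-around offset and an equal-weight blend; the scale factor `k` and blend weight `alpha` are assumed constants.

```python
import numpy as np

def offset_and_superimpose(frame, loudness, k=0.1, alpha=0.5):
    """Offset the frame by a loudness-proportional amount and blend the
    offset copy with the un-offset original, giving a ghosting effect."""
    offset = int(k * loudness)                    # image offset from loudness
    shifted = np.roll(frame, offset, axis=1)      # horizontal offset (wrap-around)
    mixed = alpha * frame.astype(np.float32) + (1 - alpha) * shifted
    return mixed.astype(np.uint8)

frame = np.zeros((2, 6, 3), dtype=np.uint8)
frame[:, 0, :] = 200                              # a single bright column
ghost = offset_and_superimpose(frame, loudness=20.0)   # offset of 2 columns
```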
In some optional implementations of the present embodiment, the adjustment unit 703 is further configured to: determine a superposition proportion according to the loudness value, where the superposition proportion is directly proportional to the loudness value; obtain the first parameter of a preset filter template; adjust the first parameter of the filter template according to the superposition proportion; and adjust the color values of the pixels in the video frame using the adjusted filter template.
In some optional implementations of the present embodiment, the extraction unit 701 is further configured to extract a frequency value from the audio data; and the adjustment unit 703 is further configured to adjust the video frame according to the frequency value.
In some optional implementations of the present embodiment, the adjustment unit 703 is further configured to: determine a target brightness according to the frequency value, where the target brightness is directly proportional to the frequency value; and, for each pixel in the video frame, adjust the brightness of the pixel according to the target brightness.
In some optional implementations of the present embodiment, the adjustment unit 703 is further configured to: determine a filter intensity according to the frequency value, where the filter intensity is directly proportional to the frequency value; obtain the second parameter of a preset filter template; adjust the second parameter of the filter template according to the filter intensity; and adjust the color values of the pixels in the video frame using the adjusted filter template.
In some optional implementations of the present embodiment, the adjustment unit 703 is further configured to change the color values of all or part of the video frame according to the frequency value.
In some optional implementations of the present embodiment, the adjustment unit 703 is further configured to change the color values of special effect elements added to the video frame according to the frequency value.
Referring now to Fig. 8, it illustrates a structural schematic diagram of a computer system 800 suitable for implementing the electronic device (the terminal device/server shown in Fig. 1) of the embodiments of the present application. The electronic device shown in Fig. 8 is only an example and should not impose any restriction on the functions and scope of use of the embodiments of the present application.
As shown in Fig. 8, the computer system 800 includes a central processing unit (CPU) 801, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 802 or a program loaded from a storage section 808 into a random access memory (RAM) 803. The RAM 803 also stores various programs and data required by the operations of the system 800. The CPU 801, the ROM 802 and the RAM 803 are connected to each other through a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
The following components are connected to the I/O interface 805: an input section 806 including a keyboard, a mouse, etc.; an output section 807 including a cathode ray tube (CRT), a liquid crystal display (LCD), a speaker, etc.; a storage section 808 including a hard disk, etc.; and a communication section 809 including a network interface card such as a LAN card or a modem. The communication section 809 performs communication processes via a network such as the Internet. A driver 810 is also connected to the I/O interface 805 as needed. A removable medium 811, such as a magnetic disk, an optical disk, a magneto-optical disk or a semiconductor memory, is mounted on the driver 810 as needed, so that a computer program read therefrom is installed into the storage section 808 as needed.
In particular, according to embodiments of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product, which comprises a computer program carried on a computer-readable medium, the computer program comprising program code for executing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication section 809, and/or installed from the removable medium 811. When the computer program is executed by the central processing unit (CPU) 801, the above-mentioned functions defined in the methods of the present application are executed. It should be noted that the computer-readable medium described herein may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, but is not limited to, an electric, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present application, a computer-readable storage medium may be any tangible medium containing or storing a program, which may be used by, or used in combination with, an instruction execution system, apparatus or device. In the present application, a computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take various forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, capable of sending, propagating or transmitting a program for use by, or in combination with, an instruction execution system, apparatus or device. The program code contained on the computer-readable medium may be transmitted with any suitable medium, including but not limited to: wireless, wire, optical cable, RF, etc., or any suitable combination of the above.
Computer program code for executing the operations of the present application may be written in one or more programming languages or a combination thereof. The programming languages include object-oriented programming languages such as Java, Smalltalk and C++, and also include conventional procedural programming languages such as the "C" language or similar programming languages. The program code may be executed entirely on a user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In a scenario involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the accompanying drawings illustrate the architectures, functions and operations that may be implemented by the systems, methods and computer program products according to various embodiments of the present application. In this regard, each box in a flowchart or block diagram may represent a module, a program segment, or a portion of code, which comprises one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions marked in the boxes may occur in an order different from that shown in the drawings. For example, two successive boxes may in fact be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each box in the block diagrams and/or flowcharts, and any combination of such boxes, may be implemented by a dedicated hardware-based system executing the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units involved in the embodiments of the present application may be implemented by means of software or hardware. The described units may also be provided in a processor; for example, it may be described as: a processor comprising an extraction unit, an acquiring unit, an adjustment unit and an output unit. The names of these units do not in some cases constitute a limitation on the units themselves; for example, the extraction unit may also be described as "a unit for extracting a loudness value from audio data in response to receiving the audio data".
In another aspect, the present application also provides a computer-readable medium. The computer-readable medium may be included in the device described in the above embodiments, or may exist alone without being assembled into the device. The computer-readable medium carries one or more programs which, when executed by the device, cause the device to: in response to receiving audio data, extract a loudness value from the audio data; acquire a video frame to be processed; adjust the color values of the pixels in the video frame according to the loudness value; and output the adjusted video frame.
The above description is only a preferred embodiment of the present application and an explanation of the applied technical principles. Those skilled in the art should appreciate that the inventive scope involved in the present application is not limited to technical solutions formed by the specific combinations of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalent features without departing from the inventive concept, for example, technical solutions formed by replacing the above features with technical features having similar functions disclosed in the present application (but not limited thereto).
Claims (18)
1. A method for outputting information, comprising:
in response to receiving audio data, extracting a loudness value from the audio data;
acquiring a video frame to be processed;
adjusting a color value of a pixel in the video frame according to the loudness value; and
outputting the adjusted video frame.
2. The method according to claim 1, wherein the adjusting a color value of a pixel in the video frame according to the loudness value comprises:
determining a rendering distance according to the loudness value, wherein the rendering distance is directly proportional to the loudness value; and
for a pixel in the video frame, adjusting the color value of the pixel according to the color value of a target pixel located at the rendering distance from the pixel in a predetermined direction.
3. The method according to claim 1, wherein the adjusting a color value of a pixel in the video frame according to the loudness value comprises:
determining a hue error value according to the loudness value, wherein the hue error value is directly proportional to the loudness value; and
for a pixel in the video frame, adjusting the color value of the pixel according to the hue error value.
4. The method according to claim 1, wherein the adjusting a color value of a pixel in the video frame according to the loudness value comprises:
determining an image offset according to the loudness value, wherein the image offset is directly proportional to the loudness value;
offsetting the video frame by the image offset to generate an offset video frame; and
superimposing the offset video frame on the video frame before the offset.
5. The method according to claim 1, wherein the adjusting a color value of a pixel in the video frame according to the loudness value comprises:
determining a superposition proportion according to the loudness value, wherein the superposition proportion is directly proportional to the loudness value;
obtaining a first parameter of a preset filter template;
adjusting the first parameter of the filter template according to the superposition proportion; and
adjusting the color value of the pixel in the video frame using the adjusted filter template.
6. The method according to any one of claims 1-5, further comprising:
extracting a frequency value from the audio data; and
adjusting the video frame according to the frequency value.
7. The method according to claim 6, wherein the adjusting the video frame according to the frequency value comprises:
determining a target brightness according to the frequency value, wherein the target brightness is directly proportional to the frequency value; and
for a pixel in the video frame, adjusting the brightness of the pixel according to the target brightness.
8. The method according to claim 6, wherein the adjusting the video frame according to the frequency value comprises:
determining a filter intensity according to the frequency value, wherein the filter intensity is directly proportional to the frequency value;
obtaining a second parameter of a preset filter template;
adjusting the second parameter of the filter template according to the filter intensity; and
adjusting the color value of the pixel in the video frame using the adjusted filter template.
9. The method according to claim 6, wherein the adjusting the video frame according to the frequency value comprises:
changing a color value of all or part of the video frame according to the frequency value.
10. The method according to claim 6, wherein the adjusting the video frame according to the frequency value comprises:
changing a color value of a special effect element added to the video frame according to the frequency value.
11. A device for outputting information, comprising:
an extraction unit, configured to extract, in response to receiving audio data, a loudness value from the audio data;
an acquiring unit, configured to acquire a video frame to be processed;
an adjustment unit, configured to adjust a color value of a pixel in the video frame according to the loudness value; and
an output unit, configured to output the adjusted video frame.
12. The device according to claim 11, wherein the adjustment unit is further configured to:
determine a rendering distance according to the loudness value, wherein the rendering distance is directly proportional to the loudness value; and
for a pixel in the video frame, adjust the color value of the pixel according to the color value of a target pixel located at the rendering distance from the pixel in a predetermined direction.
13. The device according to claim 11, wherein the adjustment unit is further configured to:
determine a hue error value according to the loudness value, wherein the hue error value is directly proportional to the loudness value; and
for a pixel in the video frame, adjust the color value of the pixel according to the hue error value.
14. The device according to claim 11, wherein the adjustment unit is further configured to:
determine an image offset according to the loudness value, wherein the image offset is directly proportional to the loudness value;
offset the video frame by the image offset to generate an offset video frame; and
superimpose the offset video frame on the video frame before the offset.
15. The device according to claim 11, wherein the adjustment unit is further configured to:
determine a superposition proportion according to the loudness value, wherein the superposition proportion is directly proportional to the loudness value;
obtain a first parameter of a preset filter template;
adjust the first parameter of the filter template according to the superposition proportion; and
adjust the color value of the pixel in the video frame using the adjusted filter template.
16. The device according to any one of claims 11-15, wherein the extraction unit is further configured to extract a frequency value from the audio data; and
the adjustment unit is further configured to adjust the video frame according to the frequency value.
17. An electronic device, comprising:
one or more processors; and
a storage device storing one or more programs thereon,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1-10.
18. A computer-readable medium storing a computer program thereon, wherein the program, when executed by a processor, implements the method according to any one of claims 1-10.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811445250.0A CN109495767A (en) | 2018-11-29 | 2018-11-29 | Method and apparatus for output information |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109495767A true CN109495767A (en) | 2019-03-19 |
Family
ID=65698676
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811445250.0A Pending CN109495767A (en) | 2018-11-29 | 2018-11-29 | Method and apparatus for output information |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109495767A (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110798737A (en) * | 2019-11-29 | 2020-02-14 | 北京达佳互联信息技术有限公司 | Video and audio synthesis method, terminal and storage medium |
CN114079817A (en) * | 2020-08-20 | 2022-02-22 | 北京达佳互联信息技术有限公司 | Video special effect control method and device, electronic equipment and storage medium |
CN112954481A (en) * | 2021-02-07 | 2021-06-11 | 脸萌有限公司 | Special effect processing method and device |
CN112954481B (en) * | 2021-02-07 | 2023-12-12 | 脸萌有限公司 | Special effect processing method and device |
WO2023244168A3 (en) * | 2022-06-17 | 2024-02-22 | Lemon Inc. | Audio or visual input interacting with video creation |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101466010A (en) * | 2009-01-15 | 2009-06-24 | 深圳华为通信技术有限公司 | Method for playing video on mobile terminal and mobile terminal |
CN104053064A (en) * | 2013-03-14 | 2014-09-17 | 霍尼韦尔国际公司 | System and method of audio information display on video playback timeline |
CN104811787A (en) * | 2014-10-27 | 2015-07-29 | 深圳市腾讯计算机系统有限公司 | Game video recording method and game video recording device |
CN105872838A (en) * | 2016-04-28 | 2016-08-17 | 徐文波 | Sending method and device of special media effects of real-time videos |
US20160322079A1 (en) * | 2014-02-05 | 2016-11-03 | Avatar Merger Sub II, LLC | Method for real time video processing involving changing a color of an object on a human face in a video |
CN106571149A (en) * | 2015-10-07 | 2017-04-19 | 三星电子株式会社 | Electronic device and music content visualization method thereof |
CN107682642A (en) * | 2017-09-19 | 2018-02-09 | 广州艾美网络科技有限公司 | Identify the method, apparatus and terminal device of special video effect triggered time point |
CN107967706A (en) * | 2017-11-27 | 2018-04-27 | 腾讯音乐娱乐科技(深圳)有限公司 | Processing method, device and the computer-readable recording medium of multi-medium data |
CN108124101A (en) * | 2017-12-18 | 2018-06-05 | 北京奇虎科技有限公司 | Video capture method, device, electronic equipment and computer readable storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109495767A (en) | Method and apparatus for output information | |
CN104137520B (en) | A kind of information push method and device | |
CN111476871B (en) | Method and device for generating video | |
CN107147939A (en) | Method and apparatus for adjusting net cast front cover | |
CN112738634B (en) | Video file generation method, device, terminal and storage medium | |
CN110070896B (en) | Image processing method, device and hardware device | |
CN108492364A (en) | The method and apparatus for generating model for generating image | |
WO2019227429A1 (en) | Method, device, apparatus, terminal, server for generating multimedia content | |
CN109033464A (en) | Method and apparatus for handling information | |
CN108833787A (en) | Method and apparatus for generating short-sighted frequency | |
CN107977946A (en) | Method and apparatus for handling image | |
CN109255337A (en) | Face critical point detection method and apparatus | |
CN109656656A (en) | Method and apparatus for generating group chat head portrait | |
CN108882025A (en) | Video frame treating method and apparatus | |
CN110516678A (en) | Image processing method and device | |
CN108391141A (en) | Method and apparatus for output information | |
CN110472558A (en) | Image processing method and device | |
CN108632645A (en) | Information demonstrating method and device | |
CN109102484A (en) | Method and apparatus for handling image | |
CN109168012A (en) | Information processing method and device for terminal device | |
CN108881928A (en) | Method and apparatus for release information, the method and apparatus for handling information | |
CN110138654A (en) | Method and apparatus for handling voice | |
CN108985178A (en) | Method and apparatus for generating information | |
CN109543068A (en) | Method and apparatus for generating the comment information of video | |
CN108595011A (en) | Information displaying method, device, storage medium and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20190319 |