CN111698553A - Video processing method and device, electronic equipment and readable storage medium

Video processing method and device, electronic equipment and readable storage medium

Info

Publication number
CN111698553A
CN111698553A
Authority
CN
China
Prior art keywords
video frame
video
target object
definition
target
Prior art date
Legal status
Granted
Application number
CN202010478033.2A
Other languages
Chinese (zh)
Other versions
CN111698553B (en)
Inventor
曾达彬
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202010478033.2A priority Critical patent/CN111698553B/en
Publication of CN111698553A publication Critical patent/CN111698553A/en
Application granted granted Critical
Publication of CN111698553B publication Critical patent/CN111698553B/en
Legal status: Active

Classifications

    • H — ELECTRICITY; H04 — ELECTRIC COMMUNICATION TECHNIQUE; H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION; H04N21/00 — Selective content distribution, e.g. interactive television or video on demand [VOD]; H04N21/40 — Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; H04N21/43 — Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream
    • H04N21/44008 — Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H04N21/4334 — Recording operations (under H04N21/433 — Content storage operation, e.g. storage operation in response to a pause request, caching operations)
    • H04N21/44 — Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/440263 — Reformatting operations of video signals for household redistribution, storage or real-time display by altering the spatial resolution, e.g. for displaying on a connected PDA

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a video processing method and device, electronic equipment and a readable storage medium, belonging to the technical field of electronic equipment. The video processing method comprises the following steps: acquiring a first video frame in a first video; determining a target object among recognizable objects included in the first video frame; and, when the definition of the target object is smaller than a preset definition, replacing a second area of the first video frame with a first area of a second video frame to obtain a second video. The second video frame is a video frame recorded synchronously with the first video frame in the process of recording the first video, and the definition of the target object in the second video frame is greater than or equal to the preset definition. With the video processing method and device, the electronic equipment and the readable storage medium provided by the embodiments of the application, the definition of the target object in the video picture can be improved during video processing, improving the use experience of the user.

Description

Video processing method and device, electronic equipment and readable storage medium
Technical Field
The application belongs to the technical field of electronic equipment, and particularly relates to a video processing method and device, electronic equipment and a readable storage medium.
Background
After shooting a video, a user often needs to perform post-processing such as video clipping on the video to improve the playing effect of the video.
However, existing video processing mostly clips the captured video frames; it is difficult to improve the definition of a target object within a video frame, which in turn degrades the user experience.
Disclosure of Invention
An embodiment of the present application provides a video processing method, an apparatus, an electronic device, and a readable storage medium, which can solve the technical problem that it is difficult to improve the definition of a target object in a video picture in a video processing process.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides a video processing method, where the method includes:
acquiring a first video frame in a first video;
determining a target object among recognizable objects included in the first video frame;
under the condition that the definition of the target object is smaller than the preset definition, replacing a second area of the first video frame with a first area in the second video frame to obtain a second video;
the first area and the second area are areas at least comprising target objects; the second video frame is a video frame obtained by synchronously recording the first video frame in the process of recording the first video, and the definition of the target object in the second video frame is greater than or equal to the preset definition.
In a second aspect, an embodiment of the present application provides a video processing apparatus, including:
the acquisition module is used for acquiring a first video frame in a first video;
a determination module for determining a target object among the recognizable objects included in the first video frame;
the replacing module is used for replacing a second area in the first video frame by using a first area in the second video frame under the condition that the definition of the target object is smaller than the preset definition to obtain a second video;
the first area and the second area are areas at least comprising target objects; the second video frame is a video frame synchronously recorded with the first video frame in the process of recording the first video, and the definition of a target object included in the second video frame is greater than or equal to the preset definition.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, and when executed by the processor, the program or instructions implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In the embodiments of the application, a preset definition is set. After the electronic device acquires the first video frame in the first video, the target object may further be determined among the recognizable objects in the first video frame. When the definition of the target object is smaller than the preset definition, the second region of the first video frame is replaced with the first region of the second video frame, in which the definition of the target object is greater than or equal to the preset definition; the second video frame and the first video frame are video frames recorded synchronously by a plurality of cameras of the electronic equipment in the process of recording the first video. Therefore, the definition of the target object can be improved in the process of processing the first video, the definition of the target object in the finally obtained second video can meet the use requirement of the user, and the use experience of the user is improved.
Drawings
The present application may be better understood from the following description of specific embodiments of the application taken in conjunction with the accompanying drawings, in which like or similar reference numerals identify like or similar features.
Fig. 1 is a schematic flowchart of a video processing method according to an embodiment of the present application;
fig. 2 is a schematic flowchart of a video processing method according to another embodiment of the present application;
fig. 3 is a schematic flowchart of a video processing method according to another embodiment of the present application;
fig. 4 is a schematic structural diagram of a video processing apparatus according to an embodiment of the present application;
fig. 5 is a schematic diagram of a hardware structure of an electronic device implementing various embodiments of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms first, second and the like in the description and in the claims of the present application are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used are interchangeable under appropriate circumstances, such that the embodiments of the application are capable of operation in sequences other than those illustrated or described herein. In addition, "and/or" in the specification and claims means at least one of the connected objects, and the character "/" generally indicates that the associated objects before and after it are in an "or" relationship.
The video processing method, the video processing apparatus, the electronic device, and the readable storage medium provided in the embodiments of the present application are described in detail with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
After shooting a video, a user often needs to perform video processing such as video clipping on the shot video in order to improve its playing effect. However, during such processing, the definition of the video picture is often affected by operations such as scaling the picture down or up.
In order to solve the foregoing problems, embodiments of the present application provide a video processing method, an apparatus, an electronic device, and a readable storage medium, which can ensure the definition of a video frame and improve the user experience during video processing.
Fig. 1 is a schematic flowchart of a video processing method according to an embodiment of the present application.
In some embodiments of the present application, the method shown in fig. 1 may be performed by an electronic device, which may include, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
As shown in fig. 1, the video processing method includes:
s110, a first video frame in the first video is obtained.
S120, a target object is determined among the recognizable objects included in the first video frame.
Wherein the target object may be a person, an animal, a plant, a building, etc. in the first video frame.
In some embodiments, the recognizable object associated with the scene information of the first video frame may be determined to be the target object.
For example, when the first video frame shows a dialog scene between two users, the users in the dialog state in the first video frame are the recognizable objects associated with the scene information. Thus, when the target object is determined, a user in the dialog state in the first video frame can be determined as the target object.
In other embodiments, manual selection of the target object may also be made by the user. The electronic device receives a first input of a user in a first video frame, and the electronic device may determine, in response to the first input, the recognizable object selected by the first input as the target object.
For example, after the electronic device displays the first video frame, the user may manually select the recognizable object in the first video frame as the target object.
In still other embodiments, the recognizable object associated with the scene information of the first video frame may first be taken as the initial target object; if the user is not satisfied with the target object determined by the electronic device according to the scene information, a manual selection may also be performed in the first video frame, and the electronic device finally determines the recognizable object selected by the user as the target object.
For example, if the scene information of the first video frame is a scene in which several people play ball together, the initial target object determined by the electronic device according to the scene information may be a person. At this time, if the user wants to highlight the ball, the ball may be further determined as the target object and taken as the target object in subsequent video frames.
And S130, under the condition that the definition of the target object is smaller than the preset definition, replacing a second area of the first video frame with a first area in the second video frame to obtain a second video.
The first area and the second area are areas at least comprising a target object, the second video frame is a video frame obtained by synchronous recording with the first video frame in the process of recording the first video, and the definition of the target object included in the second video frame is greater than or equal to the preset definition.
In some embodiments of the present application, the first region may be the entire region of the second video frame, and the corresponding second region is then the entire region of the first video frame, so that the whole first video frame is replaced by the second video frame.
In other embodiments of the present application, the first region may be a partial region of the second video frame that includes the target object, and the corresponding second region is also a partial region of the first video frame that includes the target object, so that the definition of at least the target object in the first video frame is greater than or equal to the preset definition.
In some embodiments of the present application, the preset definition is used to ensure that the target object can be played clearly in the playing process, and the specific value of the preset definition is not limited herein and can be set according to actual needs. Because the electronic device can also synchronously record the second video frame when recording the first video, under the condition that the definition of the target object in the first video frame is less than the preset definition, the first area in the second video frame with the definition of the target object greater than or equal to the preset definition can be used for replacing the second area in the first video frame, so that a clearer picture can be obtained.
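For concreteness, the following is a minimal Python/OpenCV sketch of this check-and-replace step; it is not part of the patent. The Laplacian-variance sharpness measure, the numeric threshold, and the assumption that the two synchronously recorded frames are already spatially aligned are illustrative choices only.

```python
import cv2

PRESET_SHARPNESS = 100.0  # assumed threshold; the patent does not fix a numeric value


def target_sharpness(frame, box):
    """Sharpness of the target-object region, measured as variance of the Laplacian."""
    x, y, w, h = box
    roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(roi, cv2.CV_64F).var()


def replace_if_blurry(first_frame, second_frame, box):
    """If the target object in first_frame is below the preset sharpness, copy the
    corresponding (assumed aligned) region from the synchronously recorded second_frame."""
    if target_sharpness(first_frame, box) >= PRESET_SHARPNESS:
        return first_frame
    x, y, w, h = box
    out = first_frame.copy()
    out[y:y + h, x:x + w] = second_frame[y:y + h, x:x + w]
    return out
```

Applying `replace_if_blurry` to each frame pair of the first video would yield frames of the second video; in practice the two camera views would first need to be registered to one another, which the sketch deliberately glosses over.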
In the embodiments of the application, a preset definition is set. After the electronic device acquires the first video frame in the first video, the target object may further be determined among the recognizable objects in the first video frame. When the definition of the target object is smaller than the preset definition, since the second video frame and the first video frame are recorded synchronously by a plurality of cameras of the electronic equipment in the process of recording the first video, the second region in the first video frame can be replaced with the first region in the second video frame, in which the definition of the target object is greater than or equal to the preset definition. Therefore, the definition of the target object can be improved in the process of processing the first video, the definition of the target object in the finally obtained second video can meet the use requirement of the user, and the use experience of the user is improved.
In some embodiments of the present application, in order to obtain a clearer video frame including a target object, before S110, a first video may be recorded by an electronic device having a plurality of cameras, including the following steps:
before a first video frame in a first video is acquired, a plurality of cameras included in the electronic equipment are used for synchronous recording, and the first video is acquired.
In some embodiments, the plurality of cameras may include a telephoto (long-focus) camera, a wide-angle camera, a short-focus camera, and the like. The telephoto camera can capture distant scenery and record its details, the wide-angle camera can capture scenery over a larger area, and the short-focus camera can capture scenery at close range.
Therefore, in the process of recording a video, the electronic equipment can record synchronously with the plurality of cameras, so that, for the same scene, a plurality of synchronously recorded video frames with different recording angles are obtained and are available for subsequent video processing.
In the embodiments of the application, because the video is recorded synchronously by a plurality of cameras of the electronic equipment, video frames of the same recognizable object at different recording angles can be acquired. Then, when the definition of the target object in the first video frame is smaller than the preset definition, the first area in a second video frame in which the definition of the target object is greater than or equal to the preset definition can be used to replace the second area in the first video frame, so as to obtain a clearer target object. Therefore, in the process of processing the first video, the definition of the target object can be improved, the definition of the target object in the finally obtained second video can meet the use requirement of the user, and the use experience of the user is improved.
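As a rough illustration only (the patent targets phone multi-camera hardware, not this API), synchronized capture from two cameras can be approximated with OpenCV's grab/retrieve pattern; the camera indices 0 and 1 and the wide/tele labels are assumptions.

```python
import cv2

wide = cv2.VideoCapture(0)   # e.g. the wide-angle camera (index is an assumption)
tele = cv2.VideoCapture(1)   # e.g. the telephoto camera (index is an assumption)


def grab_synchronized_pair():
    """Grab one frame from each camera as close to simultaneously as possible,
    then decode both; returns (first_frame, second_frame) or (None, None)."""
    ok_a = wide.grab()
    ok_b = tele.grab()
    if not (ok_a and ok_b):
        return None, None
    _, first_frame = wide.retrieve()
    _, second_frame = tele.retrieve()
    return first_frame, second_frame
```

Grabbing both frames before decoding either keeps the two exposures as close in time as this API allows, which is the property the synchronous recording described above relies on.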
In order to further ensure the definition of the video during post-processing, in some embodiments the image information in the first video frame also needs to be identified to obtain the recognizable objects present in the first video frame, as specifically shown in fig. 2.
As shown in fig. 2, fig. 2 is a video processing method according to another embodiment of the present application, where the video processing method includes:
s210, acquiring a first video frame in the first video.
S210 is the same as S110 in fig. 1, and is not described herein again.
S220, carrying out identification processing on the image information in the first video frame to obtain an identifiable object included in the first video frame.
The image information in the first video frame may be identified by using an Artificial Intelligence (AI) technique, so as to determine the identifiable object included in the first video frame.
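As a rough illustration (the patent does not tie the recognition to any specific model or library), a pretrained Haar-cascade face detector bundled with OpenCV can stand in for the AI recognizer when the recognizable objects are people; any other detector could be substituted, and everything besides the bundled cascade file name is an assumption.

```python
import cv2

# Haar cascade shipped with OpenCV; a DNN-based person/animal detector could stand in here.
detector = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")


def recognizable_objects(first_frame):
    """Return bounding boxes (x, y, w, h) of recognizable objects found in the frame."""
    gray = cv2.cvtColor(first_frame, cv2.COLOR_BGR2GRAY)
    boxes = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [tuple(int(v) for v in box) for box in boxes]
```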
S230, a target object is determined among the recognizable objects included in the first video frame.
S240, under the condition that the definition of the target object is smaller than the preset definition, replacing a second area in the first video frame with a first area in the second video frame to obtain a second video.
S230 and S240 are the same steps as S120 and S130 shown in fig. 1, and have the same technical effects, which are not described herein again.
And S250, playing the second video.
In the embodiments of the application, the image information in the first video frame is identified through the AI technology, so that the recognizable objects present in the first video frame can be accurately identified, and the target object can then be determined among them. When the definition of the target object in the first video frame is smaller than the preset definition, the first video frame can be replaced using the second video frame, in which the definition of the target object is greater than or equal to the preset definition, to obtain a clearer target object. Therefore, in the process of processing the first video, the definition of the target object can be improved, the definition of the target object in the finally obtained second video can meet the use requirement of the user, and the use experience of the user is improved. Moreover, a continuously clear picture of the target object can be obtained while the second video is played.
In order not to affect the sharpness of the video picture during the process of reducing or enlarging the video, another video processing method is provided in some embodiments of the present application, as shown in fig. 3 in particular.
As shown in fig. 3, fig. 3 is a schematic flowchart of a video processing method according to another embodiment of the present application. The video processing method comprises the following steps:
s310, a first video frame in the first video is obtained.
S310 is the same as S110 shown in fig. 1, and has the same technical effect, and is not described herein again.
S320, identifying the image information in the first video frame to obtain an identifiable object included in the first video frame.
S320 is the same as S220 shown in fig. 2, and has the same technical effect, and is not described herein again.
S330, receiving a second input of the user in the first video frame.
And S340, responding to the second input, reducing or enlarging the first video frame to obtain a third video frame.
And S350, judging whether the definition of the target object included in the third video frame is smaller than the preset definition.
And S360, under the condition that the definition of the target object included in the third video frame is smaller than the preset definition, replacing a second area in the third video frame with the first area in the second video frame to obtain a second video.
Wherein the first region and the second region are regions including at least the target object.
S370, in a case that the definition of the target object included in the third video frame is greater than or equal to the preset definition, determining whether the display position of the target object included in the third video frame is at the preset position.
In some embodiments of the present application, the preset position may be a center position of the playing screen, so after the first video frame is enlarged or reduced, it may also be determined whether a target object included in the obtained third video frame is displayed at the preset position.
And S380, under the condition that the display position of the target object included in the third video frame is not at the preset position, adjusting the display position of the target object included in the third video frame to the preset position to obtain a third video.
In some embodiments of the present application, if the target object included in the third video frame obtained after the first video frame is reduced or enlarged is not at the preset position, the display position of the target object may also be adjusted to the preset position. For example, the display position of the target object is adjusted to the center position of the playing screen to achieve better playing effect.
S390, recording the third video when the display position of the target object included in the third video frame is at the preset position.
In some embodiments of the present application, if the target object included in the third video frame obtained after the first video frame is reduced or enlarged is already at the preset position, the third video can be recorded directly, without adjusting the display position of the target object.
In the embodiments of the application, the target zooming information is determined based on the definition associated with the scene information of the video frame and the display size of the recognizable object in the video frame. When reducing or enlarging the video frame, the user therefore does not need to repeatedly adjust the zoom ratio according to the definition of a certain object in the video frame; instead, the first video frame is directly reduced or enlarged according to the target zooming information corresponding to the target control, so that a video frame whose definition is greater than or equal to the preset definition is obtained. Furthermore, when the target object in the third video frame obtained after reduction or enlargement is not at the preset position, the display position of the target object can be adjusted to the preset position of the playing picture. The processed video frame thus not only retains its definition but also displays the target object at an appropriate position in the playing picture, improving the subsequent playing effect.
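A minimal sketch of the reduce/enlarge-then-recenter flow (S340 and S380) is given below, assuming the preset position is the centre of the output picture and the output keeps the original display size; neither assumption is fixed by the patent.

```python
import cv2
import numpy as np


def scale_and_center(frame, box, zoom):
    """Scale the frame by `zoom`, then translate it so the target box lands on the
    preset position (here: the centre of the original-sized output picture)."""
    h, w = frame.shape[:2]
    scaled = cv2.resize(frame, None, fx=zoom, fy=zoom, interpolation=cv2.INTER_LINEAR)
    x, y, bw, bh = [v * zoom for v in box]
    dx = w / 2.0 - (x + bw / 2.0)          # shift needed to centre the target horizontally
    dy = h / 2.0 - (y + bh / 2.0)          # shift needed to centre the target vertically
    shift = np.float32([[1, 0, dx], [0, 1, dy]])
    return cv2.warpAffine(scaled, shift, (w, h))  # crop back to the original display size
```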
In other embodiments of the application, a target control corresponding to an identifiable object in a first video frame is also displayed in a preset display area of the electronic device; in some embodiments, the preset display area may be in a clip frame of the video or a peripheral area of the recognizable object on the first video frame. The target control can be a control for displaying the zooming information of the recognizable object, and the user can determine the zooming magnification of the recognizable object through the target control.
As such, S330 may further include the steps of:
receiving a second input of the target control by the user;
and responding to the second input, zooming out or magnifying the first video frame according to the target zooming information associated with the target control, and obtaining a third video frame.
Wherein the target zoom information may be a zoom magnification of the target object.
In some embodiments, the target zoom information may be determined based on a sharpness associated with the scene information in the first video frame and a display size of the recognizable object in the first video frame.
For example, scene information may be associated with a definition in advance: a dialog scene may be associated with definition A, a game scene with another definition, and so on. In this way, when the target zooming information associated with the target control is determined later, the definition may be obtained directly from the scene information of the first video frame.
Next, when the scene information in the first video frame is a dialog scene, the definition associated with the dialog scene is determined to be A, and the display size occupied by the recognizable object in the first video frame is B; weighting and summing A and B then yields the corresponding scaling factor. Here, A may be a definition range or a specific numerical value.
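Read literally, the passage above amounts to a weighted sum, which the following sketch instantiates; the scene table, the weights and the area-fraction normalisation of the display size are invented for illustration and are not values taken from the patent.

```python
# Sharpness A associated with each scene and the weights are assumptions, not patent values.
SCENE_SHARPNESS = {"dialog": 0.8, "game": 0.6}
W_SHARPNESS, W_SIZE = 0.5, 0.5


def target_zoom(scene, box, frame_shape):
    """Weighted sum of the scene-associated sharpness A and the display size B of the
    recognizable object (here: its area fraction of the frame) -> zoom magnification."""
    a = SCENE_SHARPNESS.get(scene, 0.5)
    frame_h, frame_w = frame_shape[:2]
    x, y, w, h = box
    b = (w * h) / float(frame_w * frame_h)
    return W_SHARPNESS * a + W_SIZE * b
```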
In other embodiments, the preset area may display not only the target control but also object information of the recognizable object. For example, in the case where the recognizable object is a person, information indicating characteristics of the person, such as the height of the person, may be displayed. For example, when the recognizable object is an object such as a ball or a tree, information indicating the characteristics of the object, such as the type of the object, may be displayed.
Therefore, with the target zooming information determined from the definition associated with the scene information of the video frame and the display size of the recognizable object in the video frame, a user who reduces or enlarges the video frame does not need to repeatedly adjust the zoom magnification according to the definition of a certain object in the video frame; instead the first video frame is directly reduced or enlarged according to the target zooming information corresponding to the target control, so that a video frame whose definition is greater than or equal to the preset definition can be obtained. The operation is simple for the user, and the use experience is better.
It should be noted that, in the video processing method provided in the embodiments of the present application, the execution subject may be a video processing apparatus, or a control module in the video processing apparatus for executing the video processing method. In the embodiments of the present application, a video processing apparatus executing the video processing method is taken as an example to describe the video processing apparatus provided herein.
Fig. 4 is a schematic structural diagram of a video processing apparatus according to an embodiment of the present application. As shown in fig. 4, the apparatus includes:
an obtaining module 410, configured to obtain a first video frame in a first video;
a determining module 420 for determining a target object among the recognizable objects included in the first video frame;
a replacing module 430, configured to replace a second region in the first video frame with a first region in the second video frame to obtain a second video when the definition of the target object is smaller than a preset definition;
wherein the first region and the second region are regions including at least a target object; the second video frame is a video frame obtained by synchronously recording the first video frame in the process of recording the first video, and the definition of a target object included in the second video frame is greater than or equal to the preset definition.
In the embodiments of the application, a preset definition is set. After the electronic device acquires the first video frame in the first video, the target object may further be determined among the recognizable objects in the first video frame. When the definition of the target object is smaller than the preset definition, the second region of the first video frame is replaced with the first region of the second video frame, in which the definition of the target object is greater than or equal to the preset definition; the second video frame and the first video frame are video frames recorded synchronously by a plurality of cameras of the electronic equipment in the process of recording the first video. Therefore, the definition of the target object can be improved in the process of processing the first video, the definition of the target object in the finally obtained second video can meet the use requirement of the user, and the use experience of the user is improved.
In some embodiments of the present application, the determination module is specifically configured to determine an identifiable object associated with the scene information of the first video frame as the target object.
In some embodiments of the present application, the determining module further comprises:
a first receiving unit for receiving a first input of a user;
and the first determination unit is used for responding to the first input and determining the recognizable object selected by the first input as the target object.
In some embodiments of the present application, the video processing apparatus further comprises:
the first receiving module is used for receiving a second input of a user in the first video frame before the second video is obtained by replacing the second area of the first video frame with the first area of the second video frame;
the first processing module is used for responding to the second input and reducing or amplifying the first video frame to obtain a third video frame;
the replacing module is specifically configured to replace a second region of the third video frame with a first region of the second video frame when the definition of a target object included in the third video frame is smaller than a preset definition.
In some embodiments of the present application, the video processing apparatus further comprises:
and the second processing module is used for, after the first video frame is reduced or enlarged in response to the second input to obtain the third video frame, adjusting the display position of the target object included in the third video frame to a preset position to obtain a third video, in the case that the definition of the target object included in the third video frame is greater than or equal to the preset definition and the display position of the target object is not at the preset position.
In some embodiments of the present application, the video processing apparatus further comprises:
the display module displays a target control corresponding to the recognizable object in a preset area;
the second processing module is specifically configured to respond to the second input, and reduce or enlarge the first video frame according to the target scaling information associated with the target control to obtain a third video frame;
wherein the target scaling information is determined based on the target definition and a display size of the recognizable object in the first video frame; the target sharpness is the sharpness associated with the scene information of the first video frame.
In some embodiments of the present application, the video processing apparatus further comprises:
and the recording module is used for synchronously recording by utilizing a plurality of cameras included by the electronic equipment before acquiring the first video frame in the first video to obtain the first video.
Each module of the video processing apparatus provided in the embodiment of the present application has a function of implementing the video processing method/step in the embodiment shown in fig. 1 to 3, and can achieve the technical effect corresponding to the embodiment shown in fig. 1 to 3, and is not described herein again for brevity.
The video processing apparatus in the embodiment of the present application may be an apparatus, or may be a component, an integrated circuit, or a chip in a terminal. The device can be mobile electronic equipment or non-mobile electronic equipment. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine or a kiosk, and the like, and the embodiments of the present application are not particularly limited.
The video processing apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android (Android) operating system, an ios operating system, or other possible operating systems, and embodiments of the present application are not limited specifically.
The video processing apparatus provided in the embodiment of the present application can implement each process implemented by the video processing apparatus in the method embodiments of fig. 1 to fig. 3, and for avoiding repetition, details are not repeated here.
Optionally, an embodiment of the present application further provides an electronic device, which includes a processor, a memory, and a program or an instruction stored in the memory and capable of running on the processor, where the program or the instruction, when executed by the processor, implements each process of the video processing method embodiment, and can achieve the same technical effect, and details are not repeated here to avoid repetition.
It should be noted that the electronic devices in the embodiments of the present application include the mobile electronic devices and the non-mobile electronic devices described above.
Fig. 5 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 1000 includes, but is not limited to: a radio frequency unit 1001, a network module 1002, an audio output unit 1003, an input unit 1004, a sensor 1005, a display unit 1006, a user input unit 1007, an interface unit 1008, a memory 1009, and a processor 1010.
Those skilled in the art will appreciate that the electronic device 1000 may further comprise a power source (e.g., a battery) for supplying power to various components, and the power source may be logically connected to the processor 1010 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system. The electronic device structure shown in fig. 5 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than those shown, or combine some components, or arrange different components, and thus, the description is omitted here.
The processor 1010 is configured to obtain a first video frame in a first video; determining a target object among recognizable objects included in the first video frame; under the condition that the definition of the target object is smaller than the preset definition, replacing a second area of the first video frame with a first area in the second video frame to obtain a second video; the first area and the second area are areas at least comprising target objects; the second video frame is a video frame obtained by synchronously recording the first video frame in the process of recording the first video, and the definition of the target object in the second video frame is greater than or equal to the preset definition.
In the embodiments of the application, a preset definition is set. After the electronic device acquires the first video frame in the first video, the target object may further be determined among the recognizable objects in the first video frame. When the definition of the target object is smaller than the preset definition, the second region of the first video frame is replaced with the first region of the second video frame, in which the definition of the target object is greater than or equal to the preset definition; the second video frame and the first video frame are video frames recorded synchronously by a plurality of cameras of the electronic equipment in the process of recording the first video. Therefore, the definition of the target object can be improved in the process of processing the first video, the definition of the target object in the finally obtained second video can meet the use requirement of the user, and the use experience of the user is improved.
Optionally, the processor 1010 is further configured to determine an identifiable object associated with the scene information of the first video frame as the target object.
Optionally, the user input unit 1007 is configured to receive a first input from a user;
accordingly, the processor 1010 is further configured to determine, in response to the first input, the recognizable object selected by the first input as the target object.
Optionally, the user input unit 1007 is further configured to receive a second input of the user in the first video frame before replacing the second region of the first video frame with the first region of the second video frame;
accordingly, the processor 1010 is further configured to scale down or up the first video frame in response to the second input, resulting in a third video frame.
Accordingly, the processor 1010 is further configured to replace the second region of the third video frame with the first region of the second video frame if the definition of the target object included in the third video frame is less than the preset definition.
Accordingly, the processor 1010 is further configured to, after the first video frame is reduced or enlarged to obtain the third video frame in response to the second input, adjust the display position of the target object included in the third video frame to the preset position to obtain the third video in a case that the definition of the target object included in the third video frame is greater than or equal to the preset definition and the display position of the target object included in the third video frame is not at the preset position.
In the embodiments of the present application, when the target object in the third video frame obtained after reduction or enlargement is not at the preset position, the display position of the target object may be further adjusted to the preset position of the playing picture, so that the processed video frame not only retains its definition but also displays the target object at an appropriate position in the playing picture, improving the subsequent playing effect.
Optionally, the display unit 1006 is further configured to display a target control corresponding to the recognizable object in the preset display area,
correspondingly, the processor 1010 is further configured to respond to the second input, and zoom in or zoom out the first video frame according to the target zoom information associated with the target control, so as to obtain a third video frame;
wherein the target scaling information is determined based on the target definition and a display size of the recognizable object in the first video frame; the target sharpness is the sharpness associated with the scene information of the first video frame.
The input unit 1004 is further configured to perform synchronous recording by using a plurality of cameras included in the electronic device before acquiring a first video frame in the first video, so as to obtain the first video.
In the embodiments of the application, because the video is recorded synchronously by a plurality of cameras of the electronic equipment, video frames of the same recognizable object at different recording angles can be acquired. Then, when the definition of the target object in the first video frame is smaller than the preset definition, the first area in a second video frame in which the definition of the target object is greater than or equal to the preset definition can be used to replace the second area in the first video frame; therefore, the definition of the target object can be improved in the process of processing the first video, the definition of the target object in the finally obtained second video can meet the use requirements of the user, and the use experience of the user is improved.
The embodiments of the present application further provide a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the video processing method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement each process of the above video processing method embodiment, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved, e.g., the methods described may be performed in an order different than that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (16)

1. A video processing method, comprising:
acquiring a first video frame in a first video;
determining a target object among recognizable objects included in the first video frame;
under the condition that the definition of the target object is smaller than the preset definition, replacing a second area of the first video frame with a first area in a second video frame to obtain a second video;
wherein the first region and the second region are regions including at least the target object; the second video frame is a video frame obtained by synchronously recording the first video frame in the process of recording the first video, and the definition of the target object in the second video frame is greater than or equal to the preset definition.
2. The method of claim 1, wherein said determining a target object among the identifiable objects included in the first video frame comprises:
determining an identifiable object associated with scene information of the first video frame as the target object.
3. The method of claim 1, wherein said determining a target object among the identifiable objects included in the first video frame comprises:
receiving a first input of a user;
in response to the first input, determining the identifiable object selected by the first input as the target object.
4. The method of claim 1, wherein before replacing the second region of the first video frame with the first region of the second video frame to obtain the second video, the method further comprises:
receiving a second input of a user in the first video frame;
in response to a second input, reducing or enlarging the first video frame to obtain a third video frame;
the replacing the second region of the first video frame with the first region of the second video frame when the definition of the target object is less than the preset definition includes:
and replacing a second area of the third video frame with a first area in the second video frame when the definition of the target object included in the third video frame is smaller than the preset definition.
5. The method of claim 4, wherein after reducing or enlarging the first video frame to obtain a third video frame in response to the second input, further comprising:
and under the condition that the definition of the target object included in the third video frame is greater than or equal to the preset definition and the display position of the target object included in the third video frame is not at the preset position, adjusting the display position of the target object included in the third video frame to the preset position to obtain a third video.
6. The method according to claim 4, wherein a target control corresponding to the recognizable object is displayed in a preset display area;
said reducing or enlarging said first video frame in response to a second input resulting in a third video frame comprises:
in response to the second input, zooming out or magnifying the first video frame according to target zooming information associated with the target control to obtain a third video frame;
wherein the target zoom information is determined based on a target sharpness and a display size of the identifiable object in the first video frame; the target sharpness is a sharpness associated with scene information of the first video frame.
7. The method of claim 1, wherein before the obtaining the first video frame in the first video, further comprising:
and synchronously recording by utilizing a plurality of cameras included in the electronic equipment to obtain the first video.
8. A video processing apparatus, comprising:
the acquisition module is used for acquiring a first video frame in a first video;
a determination module for determining a target object among identifiable objects comprised in the first video frame;
the replacing module is used for replacing a second area in the first video frame by using a first area in a second video frame under the condition that the definition of the target object is smaller than the preset definition to obtain a second video;
wherein the first region and the second region are regions including at least the target object; the second video frame is a video frame synchronously recorded with the first video frame in the process of recording the first video, and the definition of the target object included in the second video frame is greater than or equal to the preset definition.
9. The apparatus of claim 8, wherein the determination module is specifically configured to determine an identifiable object associated with scene information of the first video frame as the target object.
10. The apparatus of claim 8, wherein the determining module further comprises:
a first receiving unit for receiving a first input of a user;
a first determination unit, configured to determine, in response to the first input, the identifiable object selected by the first input as the target object.
11. The apparatus of claim 8, wherein the video processing apparatus further comprises:
the first receiving module is used for receiving a second input of a user in a first video frame before a second video is obtained by replacing a second area of the first video frame with a first area in the second video frame;
the first processing module is used for responding to a second input and reducing or amplifying the first video frame to obtain a third video frame;
the replacing module is specifically configured to replace a second region of the third video frame with a first region of the second video frame when the definition of the target object included in the third video frame is smaller than the preset definition.
12. The apparatus of claim 11, wherein the video processing apparatus further comprises:
and the second processing module is used for adjusting the display position of the target object included in the third video frame to the preset position to obtain a third video under the condition that the definition of the target object included in the third video frame is greater than or equal to the preset definition and the display position of the target object included in the third video frame is not at the preset position after the first video frame is reduced or enlarged in response to the second input to obtain the third video frame.
13. The apparatus of claim 11, wherein the video processing apparatus further comprises:
a display module configured to display a target control corresponding to the identifiable object in a preset display area;
wherein the first processing module is specifically configured to, in response to the second input, reduce or enlarge the first video frame according to target zoom information associated with the target control to obtain the third video frame;
wherein the target zoom information is determined based on a target definition and a display size of the identifiable object in the first video frame, and the target definition is a definition associated with scene information of the first video frame.
14. The apparatus of claim 8, wherein the video processing apparatus further comprises:
a recording module configured to synchronously record with a plurality of cameras included in the electronic equipment before the first video frame in the first video is acquired, to obtain the first video.
15. An electronic device, comprising a processor, a memory, and a program or instructions stored on the memory and executable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the video processing method according to any one of claims 1 to 7.
16. A readable storage medium, on which a program or instructions are stored, wherein the program or instructions, when executed by a processor, implement the steps of the video processing method according to any one of claims 1 to 7.
CN202010478033.2A 2020-05-29 2020-05-29 Video processing method and device, electronic equipment and readable storage medium Active CN111698553B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010478033.2A CN111698553B (en) 2020-05-29 2020-05-29 Video processing method and device, electronic equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN111698553A (en) 2020-09-22
CN111698553B (en) 2022-09-27

Family

ID=72478940

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010478033.2A Active CN111698553B (en) 2020-05-29 2020-05-29 Video processing method and device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN111698553B (en)

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080025628A1 (en) * 2004-10-26 2008-01-31 Koninklijke Philips Electronics, N.V. Enhancement of Blurred Image Portions
CN101420508A (en) * 2008-12-02 2009-04-29 西安交通大学 Content related image scaling method
CN101783900A (en) * 2010-03-10 2010-07-21 华为终端有限公司 Method and device thereof for zooming image in partitions
WO2016004595A1 (en) * 2014-07-09 2016-01-14 Splunk Inc. Minimizing blur operations for creating a blur effect for an image
CN105120169A (en) * 2015-09-01 2015-12-02 联想(北京)有限公司 Information processing method and electronic equipment
CN105578275A (en) * 2015-12-16 2016-05-11 小米科技有限责任公司 Video display method and apparatus
CN107786827A (en) * 2017-11-07 2018-03-09 维沃移动通信有限公司 Video capture method, video broadcasting method, device and mobile terminal
CN107707825A (en) * 2017-11-27 2018-02-16 维沃移动通信有限公司 A kind of panorama shooting method, mobile terminal and computer-readable recording medium
CN109803172A (en) * 2019-01-03 2019-05-24 腾讯科技(深圳)有限公司 A kind of processing method of live video, device and electronic equipment
CN110099213A (en) * 2019-04-26 2019-08-06 维沃移动通信(杭州)有限公司 A kind of image display control method and terminal
CN110084775A (en) * 2019-05-09 2019-08-02 深圳市商汤科技有限公司 Image processing method and device, electronic equipment and storage medium
CN110266994A (en) * 2019-06-26 2019-09-20 广东小天才科技有限公司 A kind of video call method, video conversation apparatus and terminal
CN110796664A (en) * 2019-10-14 2020-02-14 北京字节跳动网络技术有限公司 Image processing method, image processing device, electronic equipment and computer readable storage medium
CN110798709A (en) * 2019-11-01 2020-02-14 腾讯科技(深圳)有限公司 Video processing method and device, storage medium and electronic device
CN110992284A (en) * 2019-11-29 2020-04-10 Oppo广东移动通信有限公司 Image processing method, image processing apparatus, electronic device, and computer-readable storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
P. Premaratne; C.C. Ko: "Parametric modeling of blurred images for image restoration", Conference Record of the Thirty-Fourth Asilomar Conference on Signals, Systems and Computers (Cat. No.00CH37154) *
Qiao Congbin et al.: "Video deblurring based on motion segmentation" (基于运动分割的视频去模糊), Journal of Computer-Aided Design & Computer Graphics (计算机辅助设计与图形学学报) *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112954212A (en) * 2021-02-08 2021-06-11 维沃移动通信有限公司 Video generation method, device and equipment
CN113014817B (en) * 2021-03-04 2022-11-29 维沃移动通信有限公司 Method and device for acquiring high-definition high-frame video and electronic equipment
CN113014817A (en) * 2021-03-04 2021-06-22 维沃移动通信有限公司 Method and device for acquiring high-definition high-frame video and electronic equipment
CN113207038A (en) * 2021-04-21 2021-08-03 维沃移动通信(杭州)有限公司 Video processing method, video processing device and electronic equipment
CN113207038B (en) * 2021-04-21 2023-04-28 维沃移动通信(杭州)有限公司 Video processing method, video processing device and electronic equipment
CN113225451A (en) * 2021-04-28 2021-08-06 维沃移动通信(杭州)有限公司 Image processing method and device and electronic equipment
WO2023000787A1 (en) * 2021-07-20 2023-01-26 苏州景昱医疗器械有限公司 Video processing method and apparatus, electronic device, and computer readable storage medium
CN113613024A (en) * 2021-08-09 2021-11-05 北京金山云网络技术有限公司 Video preprocessing method and device
CN113852756A (en) * 2021-09-03 2021-12-28 维沃移动通信(杭州)有限公司 Image acquisition method, device, equipment and storage medium
CN113852757A (en) * 2021-09-03 2021-12-28 维沃移动通信(杭州)有限公司 Video processing method, device, equipment and storage medium
CN113965771A (en) * 2021-10-22 2022-01-21 成都天翼空间科技有限公司 VR live broadcast user interactive experience system
CN115546043A (en) * 2022-03-31 2022-12-30 荣耀终端有限公司 Video processing method and related equipment
CN115546043B (en) * 2022-03-31 2023-08-18 荣耀终端有限公司 Video processing method and related equipment thereof

Similar Documents

Publication Title
CN111698553B (en) Video processing method and device, electronic equipment and readable storage medium
CN112714255B (en) Shooting method and device, electronic equipment and readable storage medium
US20220417417A1 (en) Content Operation Method and Device, Terminal, and Storage Medium
CN105320695A (en) Picture processing method and device
CN112714253B (en) Video recording method and device, electronic equipment and readable storage medium
EP3128411A1 (en) Interface display method and device
CN111835982B (en) Image acquisition method, image acquisition device, electronic device, and storage medium
CN106249508A (en) Atomatic focusing method and system, filming apparatus
CN111866392A (en) Shooting prompting method and device, storage medium and electronic equipment
CN113014798A (en) Image display method and device and electronic equipment
CN112954212B (en) Video generation method, device and equipment
CN112887609A (en) Shooting method, shooting device, electronic equipment and storage medium
CN112532808A (en) Image processing method and device and electronic equipment
CN113411498A (en) Image shooting method, mobile terminal and storage medium
CN113873166A (en) Video shooting method and device, electronic equipment and readable storage medium
CN109358927B (en) Application program display method and device and terminal equipment
CN113866782A (en) Image processing method and device and electronic equipment
CN114390197A (en) Shooting method and device, electronic equipment and readable storage medium
CN116188343A (en) Image fusion method and device, electronic equipment, chip and medium
CN113038218B (en) Video screenshot method, device, equipment and readable storage medium
CN112383708B (en) Shooting method and device, electronic equipment and readable storage medium
CN112887606B (en) Shooting method and device and electronic equipment
CN115278047A (en) Shooting method, shooting device, electronic equipment and storage medium
CN113473012A (en) Virtualization processing method and device and electronic equipment
US11600300B2 (en) Method and device for generating dynamic image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant