CN110839174A - Image processing method and device, computer equipment and storage medium


Info

Publication number
CN110839174A
CN110839174A
Authority
CN
China
Prior art keywords
image
target
video frame
size
screen video
Legal status
Pending
Application number
CN201911216201.4A
Other languages
Chinese (zh)
Inventor
Yao Jun (姚俊)
Current Assignee
Guangzhou Kugou Computer Technology Co Ltd
Original Assignee
Guangzhou Kugou Computer Technology Co Ltd
Application filed by Guangzhou Kugou Computer Technology Co Ltd filed Critical Guangzhou Kugou Computer Technology Co Ltd
Priority to CN201911216201.4A
Publication of CN110839174A

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/4402Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/440263Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display by altering the spatial resolution, e.g. for displaying on a connected PDA
    • H04N21/440272Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display by altering the spatial resolution, e.g. for displaying on a connected PDA for performing aspect ratio conversion
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/262Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission, generating play-lists
    • H04N21/26291Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission, generating play-lists for providing content or additional data updates, e.g. updating software modules, stored at the client

Abstract

The application discloses an image processing method and apparatus, a computer device, and a storage medium, belonging to the field of computer technologies. The method includes: acquiring a landscape video frame of a landscape video, and cropping a region image from the landscape video frame; generating, based on the region image, a target image whose image size is a target size; and placing the target image on a lower layer and the landscape video frame on an upper layer, aligning the vertical edges of the landscape video frame and the target image, and performing image synthesis to obtain a composite portrait video frame. The method and apparatus avoid wasting display resources.

Description

Image processing method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular to an image processing method and apparatus, a computer device, and a storage medium.
Background
With the development of the short video industry, more and more people are using short videos to record their lives.
To obtain a better viewing experience, users generally shoot short videos in landscape mode and publish them to the network once shooting is complete. Other users can then tap a short video on the network to watch it. During playback, the mobile phone inspects the video, and when its resolution indicates a landscape short video, the video is displayed in the middle of the portrait screen.
In implementing the present application, the inventor found that the prior art has at least the following problem:
because the landscape short video is displayed in the middle of the portrait screen during playback, black bars appear around it, wasting display resources.
Disclosure of Invention
The embodiments of the present application provide an image processing method and apparatus, a computer device, and a storage medium, which can solve the problem of wasted display resources. The technical solution is as follows:
In one aspect, an image processing method is provided, the method comprising:
acquiring a landscape video frame of a landscape video, and cropping a region image from the landscape video frame;
generating, based on the region image, a target image whose image size is a target size, where the target size is the size of a preset portrait image frame;
and placing the target image on a lower layer and the landscape video frame on an upper layer, aligning the vertical edges of the landscape video frame and the target image, and performing image synthesis to obtain a composite portrait video frame.
Optionally, the cropping a region image from the landscape video frame includes:
determining the height of the landscape video frame as a target height;
determining a target width based on the target height, where the ratio of the target height to the target width equals the ratio of height to width in the size of the preset portrait image frame;
and cropping a region image from the landscape video frame based on the target height and the target width.
Optionally, the cropping a region image from the landscape video frame includes:
determining, based on a pre-stored correspondence between landscape-video-frame sizes and region-image sizes, the region-image size corresponding to the size of the currently acquired landscape video frame, where the determined region-image size comprises a target height and a target width, and the ratio of height to width in each region-image size in the correspondence equals the ratio of height to width in the size of the preset portrait image frame;
and cropping a region image from the landscape video frame based on the target height and the target width.
Optionally, the cropping a region image from the landscape video frame based on the target height and the target width includes:
cropping a region image at the middle position of the landscape video frame based on the target height and the target width.
Optionally, the generating, based on the region image, a target image whose image size is a target size includes:
scaling the region image based on the ratio of the height in the size of the preset portrait image frame to the target height, to generate a target image whose image size is the target size.
Optionally, the placing the target image on a lower layer and the landscape video frame on an upper layer, aligning the vertical edges of the landscape video frame and the target image, and performing image synthesis to obtain a composite portrait video frame includes:
placing the target image on a lower layer and the landscape video frame on an upper layer, aligning the vertical edges of the landscape video frame and the target image, aligning the middle positions of the landscape video frame and the target image, and performing image synthesis to obtain a composite portrait video frame.
In another aspect, an image processing apparatus is provided, the apparatus comprising:
a cropping module, configured to acquire a landscape video frame of a landscape video and crop a region image from the landscape video frame;
a generating module, configured to generate, based on the region image, a target image whose image size is a target size, where the target size is the size of a preset portrait image frame;
and a synthesizing module, configured to place the target image on a lower layer and the landscape video frame on an upper layer, align the vertical edges of the landscape video frame and the target image, and perform image synthesis to obtain a composite portrait video frame.
Optionally, the cropping module is configured to:
determine the height of the landscape video frame as a target height;
determine a target width based on the target height, where the ratio of the target height to the target width equals the ratio of height to width in the size of the preset portrait image frame;
and crop a region image from the landscape video frame based on the target height and the target width.
Optionally, the cropping module is configured to:
determine, based on a pre-stored correspondence between landscape-video-frame sizes and region-image sizes, the region-image size corresponding to the size of the currently acquired landscape video frame, where the determined region-image size comprises a target height and a target width, and the ratio of height to width in each region-image size in the correspondence equals the ratio of height to width in the size of the preset portrait image frame;
and crop a region image from the landscape video frame based on the target height and the target width.
Optionally, the cropping module is configured to:
crop a region image at the middle position of the landscape video frame based on the target height and the target width.
Optionally, the generating module is configured to:
scale the region image based on the ratio of the height in the size of the preset portrait image frame to the target height, to generate a target image whose image size is the target size.
Optionally, the synthesizing module is configured to:
place the target image on a lower layer and the landscape video frame on an upper layer, align the vertical edges of the landscape video frame and the target image, align the middle positions of the landscape video frame and the target image, and perform image synthesis to obtain a composite portrait video frame.
In yet another aspect, a computer device is provided that includes one or more processors and one or more memories storing at least one instruction, the instruction being loaded and executed by the one or more processors to perform the operations performed by the image processing method.
In yet another aspect, a computer-readable storage medium is provided that stores at least one instruction, the instruction being loaded and executed by a processor to perform the operations performed by the image processing method.
The technical solution provided by the embodiments of the present application brings the following beneficial effect:
the target image has the size of the portrait image frame, and the landscape video frame is smaller than the target image, so the composite portrait video frame also has the size of the portrait image frame, which is the size of the phone screen during portrait playback, thereby avoiding the waste of display resources.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed for describing the embodiments are briefly introduced below. The following drawings show only some embodiments of the present application; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic diagram of an implementation environment provided by an embodiment of the present application;
FIG. 2 is a schematic diagram of an implementation environment provided by an embodiment of the present application;
FIG. 3 is a flow chart of a method of image processing provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of a composite portrait video frame in an image processing method according to an embodiment of the present application;
FIG. 5 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
FIG. 6 is a schematic structural diagram of a terminal according to an embodiment of the present application;
FIG. 7 is a schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions, and advantages of the present application clearer, the embodiments of the present application are described in further detail below with reference to the accompanying drawings.
The embodiments of the present application provide an image processing method, which may be implemented by a terminal or a server. The terminal may be a mobile phone, a desktop computer, a tablet computer, a notebook computer, or the like, and may be equipped with a screen and other components. The terminal can play images, play audio, and process images, and applications such as a short video application may be installed on it. The server may be a background server of the application and can communicate with the terminal. The server may be a single server or a server group: a single server is responsible for all server-side processing in the following scheme, whereas in a server group different servers may each be responsible for different parts of that processing, with the specific allocation set by technicians according to actual needs; this is not repeated here.
A short video application may be installed on the terminal. After the user taps the short video application, the terminal displays a feed interface, shown in FIG. 1, which opens on the Follow tab by default. The user can enter the Follow tab by tapping the Follow control; this tab shows updates from the users being followed, and tapping the heart control in the upper left corner endorses a short video, an operation commonly known as a "like". The user can also select controls such as Home, Good Voices, and Me to enter the corresponding interfaces. The interfaces for the Home and Good Voices controls are similar to the Follow tab, while the Me interface shows the user's personal information, account, and so on, where the user can log out or switch accounts.
As shown in FIG. 2, a user may shoot a short video through the short video application, or import a short video stored on the terminal into it. Once the short video is obtained, the user can upload it to the server over the network. After the upload, the server may send a data update message containing a play control for the short video to all terminals; when another user triggers the play control, the short video is transmitted to that user's terminal and can be watched through the short video application.
FIG. 3 is a flowchart of an image processing method according to an embodiment of the present application. Referring to FIG. 3, the process includes the following steps:
step 301, acquiring a horizontal screen video frame of a horizontal screen video, and capturing an area image in the horizontal screen video frame.
The horizontal screen video can be a horizontal screen live video and can also be a horizontal screen short video.
In implementation, the method may be applied to a terminal or a server for uploading a video, and first, a horizontal screen video frame of a horizontal screen video is obtained, where the horizontal screen video frame may also be a video frame of another horizontal screen video or a picture, and a short video application detects a size of the horizontal screen video frame, where the size includes a height and a width, where the height and the width respectively represent the number of pixels included in the horizontal screen video frame in the longitudinal direction and the horizontal direction, and the height and the width may be represented by a resolution. After obtaining the size of the horizontal screen video frame, the area image can be intercepted in the following ways:
in the first mode, the height of a horizontal screen video frame is determined as a target height, then a target width is determined based on the target height, and then an area image is intercepted in the horizontal screen video frame based on the target height and the target width.
And the ratio of the target height to the target width is equal to the ratio of the height to the width in the preset size of the vertical screen image frame.
Firstly, determining the height of a horizontal screen video frame as a target height, secondly, calculating the ratio of the height to the width in the size of a preset vertical screen image frame, then calculating the corresponding target width according to the ratio and the target height, then determining the center of an area image to be intercepted based on the obtained target height and the target width, determining the center of the horizontal screen video frame based on the height and the width of the horizontal screen video frame, wherein the center can be the intersection point of the diagonal lines of the area image to be intercepted and the intersection point of the diagonal lines of the horizontal screen video frame respectively, and intercepting the area image based on the obtained target height, the target width and the two centers after the two centers are obtained.
For example, the resolution of the horizontal video frame is 500 × 200, that is, the height is 200, the width is 500, the resolution of the preset vertical image frame is 200 × 400, that is, the height is 400, the width is 200, the ratio 2 is calculated, first, the height 200 of the horizontal video frame is taken as a target height, then the target width is 100 based on the ratio 2, after the target height and the target width are obtained, the center of the region image to be clipped is determined based on the obtained target height and the target width, and the center of the horizontal video frame is determined based on the height and the width of the horizontal video frame, which may be the intersection point of the diagonal lines of the region image to be clipped and the intersection point of the diagonal lines of the horizontal video frame, respectively, after the two centers are obtained, the two centers may be aligned and then the resolution is 100 × 200 according to the target width 100 and the target height 200, and the width is 500 for the height 200, namely, the horizontal screen video frame with the resolution of 500 x 200 is intercepted to obtain a regional image.
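The geometry of this first way can be sketched in a few lines of Python. This is a minimal illustration rather than code from the patent; the function name and the width-first argument order are my own:

```python
def centered_crop_rect(frame_w, frame_h, portrait_w, portrait_h):
    """Centered crop whose aspect ratio matches the preset portrait frame.

    The crop height is the full landscape height (the first way above);
    the crop width follows from the portrait height-to-width ratio.
    """
    target_h = frame_h                      # target height = frame height
    ratio = portrait_h / portrait_w         # e.g. 400 / 200 = 2
    target_w = round(target_h / ratio)      # e.g. 200 / 2 = 100
    left = (frame_w - target_w) // 2        # aligning the diagonal centers
    top = (frame_h - target_h) // 2         # 0 when the full height is used
    return left, top, target_w, target_h

# The worked example above: a 500 x 200 landscape frame, 200 x 400 portrait frame.
print(centered_crop_rect(500, 200, 200, 400))  # -> (200, 0, 100, 200)
```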
In the second way, the region-image size corresponding to the size of the currently acquired landscape video frame is determined based on a pre-stored correspondence between landscape-video-frame sizes and region-image sizes.
The determined region-image size comprises a target height and a target width, and the ratio of height to width in each region-image size in the correspondence equals the ratio of height to width in the size of the preset portrait image frame.
First, the memory stores the correspondence between landscape-video-frame sizes and region-image sizes, in which the ratio of height to width in every region-image size equals the ratio of height to width in the size of the preset portrait image frame. Based on this correspondence, the region-image size corresponding to the size of the currently acquired landscape video frame, namely the target height and target width, is determined.
For example, if the resolution of the landscape video frame is 1024 × 768, i.e. height 768 and width 1024, the corresponding region-image resolution stored in memory is 54 × 96, giving a target width of 54 and a target height of 96.
Second, after the target height and target width are obtained, the short video application crops the region image from the landscape video frame.
The method may also be applied to the terminal on which the video is watched. When a user wants to watch a landscape video on such a terminal and taps the corresponding play control, the short video application first detects whether the phone is held in landscape or portrait orientation. In landscape orientation, the landscape video is played normally. In portrait orientation, the terminal obtains a landscape video frame of the video from the server, whose resolution may be the server background's default, then queries the memory for the pre-stored region-image resolution, which may be the watching terminal's default region-image resolution, and crops the region image at that resolution.
For example, the default resolution of the landscape video frame delivered by the server background is 1024 × 768. The server transmits the frame to the watching terminal, which queries its memory and obtains its default region-image resolution of 54 × 96; the short video application then crops a 54 × 96 region image from the 1024 × 768 landscape video frame.
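A sketch of the second way, under the assumption that the pre-stored correspondence is a simple in-memory table; the table name and its single entry below are illustrative, taken from the example above rather than specified by the patent:

```python
# Pre-stored correspondence: landscape frame size -> region-image size,
# both as (width, height). Every stored region size keeps the portrait
# height-to-width ratio (here 96 / 54 = 960 / 540).
REGION_SIZE_BY_FRAME_SIZE = {
    (1024, 768): (54, 96),
}

def lookup_region_size(frame_w, frame_h):
    """Return the (target_w, target_h) stored for this landscape frame size."""
    return REGION_SIZE_BY_FRAME_SIZE[(frame_w, frame_h)]

print(lookup_region_size(1024, 768))  # -> (54, 96)
```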
Step 302: generate, based on the region image, a target image whose image size is a target size.
The target size is the size of the preset portrait image frame.
In implementation, the region image is scaled based on the ratio of the height in the size of the preset portrait image frame to the target height, generating a target image whose image size is the target size. The processing covers the following cases:
First, if the image size of the region image is smaller than the image size of the preset portrait image frame, enlargement is performed.
When the resolution of the cropped region image is smaller than that of the target image, i.e. the target width and target height of the region image are smaller than the width and height of the target image, the short video application enlarges the region image. Enlargement may be done by point supplementing: the region image is passed to a point-supplementing function, which adds pixels to the region image so that the output target image has the same resolution as the preset portrait image frame.
For example, the resolution of the landscape video frame may be 1024 × 768, the corresponding region-image resolution 54 × 96, and the resolution of the preset portrait image frame 540 × 960, so the region image is smaller than the preset portrait frame. The linear scale factor between the preset portrait resolution 540 × 960 and the region-image resolution 54 × 96 is 10, so each pixel of the region image is replicated into a 10 × 10 block of 100 pixels, and the pixels are arranged according to the corresponding rule to obtain a 540 × 960 target image.
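One simple reading of this point supplementing is nearest-neighbour replication, sketched below with NumPy. The patent does not name a concrete point-supplementing function, so this is an assumption:

```python
import numpy as np

def upscale_by_replication(region, factor):
    """Enlarge an image by turning each pixel into a factor x factor block."""
    out = np.repeat(region, factor, axis=0)  # replicate rows
    return np.repeat(out, factor, axis=1)    # replicate columns

region = np.zeros((96, 54, 3), dtype=np.uint8)  # a 54 x 96 region image
target = upscale_by_replication(region, 10)
print(target.shape)  # (960, 540, 3), i.e. a 540 x 960 target image
```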
Second, if the image size of the region image equals the image size of the preset portrait image frame, no scaling is performed.
When the resolution of the cropped region image equals the resolution of the preset portrait image frame, the terminal leaves the region image unchanged.
For example, if the resolution of the landscape video frame is 1024 × 768, the corresponding region-image resolution is 540 × 960, and the resolution of the preset portrait image frame is 540 × 960, the two resolutions are equal and the scale factor is 1; no pixel is copied, and the cropped region image is itself the target image.
Third, if the image size of the region image is larger than the image size of the preset portrait image frame, reduction is performed.
When the resolution of the cropped region image is greater than the resolution of the preset portrait image frame, the terminal may reduce the cropped region image. Reduction may be done by point removing, i.e. removing pixels evenly across the region image so that the resolution after removal equals the resolution of the preset portrait image frame, yielding the target image.
For example, if the resolution of the landscape video frame is 1024 × 768, the corresponding region-image resolution is 594 × 1056, and the resolution of the preset portrait image frame is 540 × 960, the region image is larger than the preset portrait frame: it contains 594 × 1056 = 627,264 pixels against the portrait frame's 540 × 960 = 518,400, i.e. 108,864 extra pixels. The short video application removes these extra pixels with a point-removing function to obtain a 540 × 960 target image.
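In practice, all three cases can be handled by one standard resize call. The sketch below substitutes OpenCV interpolation for the patent's point-supplementing and point-removing functions, which is an assumption rather than the patent's stated implementation:

```python
import cv2

def scale_to_target(region, target_w, target_h):
    """Scale the region image to the preset portrait size (all three cases)."""
    h, w = region.shape[:2]
    if (w, h) == (target_w, target_h):
        return region.copy()  # equal sizes: no scaling needed
    # Enlarging: bilinear interpolation; shrinking: area averaging.
    interp = cv2.INTER_LINEAR if target_w > w else cv2.INTER_AREA
    return cv2.resize(region, (target_w, target_h), interpolation=interp)
```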
Step 303: place the target image on the lower layer and the landscape video frame on the upper layer, align the vertical edges of the landscape video frame and the target image, and perform image synthesis to obtain a composite portrait video frame.
In implementation, after the target image is obtained through the above steps, the short video application may process it before placing it on the lower layer; for example, the target image may be blurred by passing it to a blur function, and the blurred target image is then placed on the lower layer. The landscape video frame, which may equally be a video frame of another landscape video or a picture, is placed on the upper layer; its vertical edges are aligned with those of the target image, and the middle positions of the two are aligned by making the intersection point of the landscape frame's diagonals coincide with that of the target image's diagonals. The superposed landscape video frame and target image are then synthesized into a composite portrait video frame.
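A minimal sketch of this composition step with Pillow, assuming the landscape frame has already been scaled to the portrait width so that the vertical edges align; the Gaussian blur and its radius stand in for the unspecified blur function:

```python
from PIL import Image, ImageFilter

def compose_portrait_frame(landscape, target, blur_radius=25):
    """Blurred target image on the lower layer, landscape frame on the upper
    layer, with the intersection points of their diagonals coinciding."""
    background = target.filter(ImageFilter.GaussianBlur(blur_radius))
    left = (background.width - landscape.width) // 2   # 0 when widths match
    top = (background.height - landscape.height) // 2  # vertical centering
    background.paste(landscape, (left, top))           # upper layer on top
    return background
```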
The above steps describe the processing of one landscape video frame; the remaining frames of a video are processed in the same way and are not repeated here.
After these operations, when a user taps a landscape short video in the short video application, the screen displays a picture like that shown in FIG. 4: a landscape video embedded in the middle of a portrait video. The portrait background may change with the landscape picture, or it may always display a certain preset picture, which may carry text, an image, or a preset portrait video.
The target image has the size of the portrait image frame, and the landscape video frame is smaller than the target image, so the composite portrait video frame also has the size of the portrait image frame, which is the size of the phone screen during portrait playback, thereby avoiding the waste of display resources.
All the above optional technical solutions may be combined arbitrarily to form optional embodiments of the present disclosure, which are not described in detail here.
FIG. 5 is a schematic diagram of an image processing apparatus according to an embodiment of the present application. Referring to FIG. 5, the apparatus includes:
a cropping module 510, configured to acquire a landscape video frame of a landscape video and crop a region image from the landscape video frame;
a generating module 520, configured to generate, based on the region image, a target image whose image size is a target size, where the target size is the size of a preset portrait image frame;
and a synthesizing module 530, configured to place the target image on a lower layer and the landscape video frame on an upper layer, align the vertical edges of the landscape video frame and the target image, and perform image synthesis to obtain a composite portrait video frame.
Optionally, the cropping module 510 is configured to:
determine the height of the landscape video frame as a target height;
determine a target width based on the target height, where the ratio of the target height to the target width equals the ratio of height to width in the size of the preset portrait image frame;
and crop a region image from the landscape video frame based on the target height and the target width.
Optionally, the cropping module 510 is configured to:
determine, based on a pre-stored correspondence between landscape-video-frame sizes and region-image sizes, the region-image size corresponding to the size of the currently acquired landscape video frame, where the determined region-image size comprises a target height and a target width, and the ratio of height to width in each region-image size in the correspondence equals the ratio of height to width in the size of the preset portrait image frame;
and crop a region image from the landscape video frame based on the target height and the target width.
Optionally, the cropping module 510 is configured to:
crop a region image at the middle position of the landscape video frame based on the target height and the target width.
Optionally, the generating module 520 is configured to:
scale the region image based on the ratio of the height in the size of the preset portrait image frame to the target height, to generate a target image whose image size is the target size.
Optionally, the synthesizing module 530 is configured to:
place the target image on a lower layer and the landscape video frame on an upper layer, align the vertical edges of the landscape video frame and the target image, align the middle positions of the landscape video frame and the target image, and perform image synthesis to obtain a composite portrait video frame.
The target image has the size of the portrait image frame, and the landscape video frame is smaller than the target image, so the composite portrait video frame also has the size of the portrait image frame, which is the size of the phone screen during portrait playback, thereby avoiding the waste of display resources.
It should be noted that when the image processing apparatus provided by the above embodiment processes an image, the division into the above functional modules is only an illustration; in practical applications, the above functions may be assigned to different functional modules as needed, i.e. the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the apparatus embodiment and the image processing method embodiments provided above belong to the same concept; for its specific implementation, refer to the method embodiments, which is not repeated here.
FIG. 6 shows a block diagram of a terminal 600 according to an exemplary embodiment of the present application. The terminal may be the terminal that uploads a video or the terminal that watches a video. The terminal 600 may be a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. The terminal 600 may also be called user equipment, a portable terminal, a laptop terminal, a desktop terminal, or other names.
In general, the terminal 600 includes: a processor 601 and a memory 602.
The processor 601 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so on. The processor 601 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 601 may also include a main processor and a coprocessor, where the main processor is a processor for processing data in an awake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 601 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content required to be displayed on the display screen. In some embodiments, processor 601 may also include an AI (Artificial Intelligence) processor for processing computational operations related to machine learning.
The memory 602 may include one or more computer-readable storage media, which may be non-transitory. The memory 602 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 602 is used to store at least one instruction for execution by processor 601 to implement the image processing methods provided by the method embodiments herein.
In some embodiments, the terminal 600 may further optionally include: a peripheral interface 603 and at least one peripheral. The processor 601, memory 602, and peripheral interface 603 may be connected by buses or signal lines. Various peripheral devices may be connected to the peripheral interface 603 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of a radio frequency circuit 604, a touch screen display 605, a camera 606, an audio circuit 607, a positioning component 608, and a power supply 609.
The peripheral interface 603 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 601 and the memory 602. In some embodiments, the processor 601, memory 602, and peripheral interface 603 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 601, the memory 602, and the peripheral interface 603 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The Radio Frequency circuit 604 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 604 communicates with communication networks and other communication devices via electromagnetic signals. The rf circuit 604 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 604 comprises: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuitry 604 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, various generation mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the rf circuit 604 may further include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display 605 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display 605 is a touch display, it can also capture touch signals on or over its surface. A touch signal may be input to the processor 601 as a control signal for processing. At this point, the display 605 may also provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display 605, set on the front panel of the terminal 600; in other embodiments, there may be at least two displays 605, respectively disposed on different surfaces of the terminal 600 or in a folded design; in still other embodiments, the display 605 may be a flexible display disposed on a curved or folded surface of the terminal 600. The display 605 may even be arranged in a non-rectangular irregular pattern, i.e. a shaped screen. The display 605 may be made of materials such as an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode).
The camera assembly 606 is used to capture images or video. Optionally, camera assembly 606 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 606 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
Audio circuitry 607 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 601 for processing or inputting the electric signals to the radio frequency circuit 604 to realize voice communication. For the purpose of stereo sound collection or noise reduction, a plurality of microphones may be provided at different portions of the terminal 600. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 601 or the radio frequency circuit 604 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, audio circuitry 607 may also include a headphone jack.
The positioning component 608 is used to locate the current geographic location of the terminal 600 to implement navigation or LBS (Location Based Service). The positioning component 608 may be based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
Power supply 609 is used to provide power to the various components in terminal 600. The power supply 609 may be ac, dc, disposable or rechargeable. When the power supply 609 includes a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, the terminal 600 also includes one or more sensors 610. The one or more sensors 610 include, but are not limited to: acceleration sensor 611, gyro sensor 612, pressure sensor 613, fingerprint sensor 614, optical sensor 615, and proximity sensor 616.
The acceleration sensor 611 may detect the magnitude of acceleration in three coordinate axes of the coordinate system established with the terminal 600. For example, the acceleration sensor 611 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 601 may control the touch screen display 605 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 611. The acceleration sensor 611 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 612 may detect a body direction and a rotation angle of the terminal 600, and the gyro sensor 612 and the acceleration sensor 611 may cooperate to acquire a 3D motion of the user on the terminal 600. The processor 601 may implement the following functions according to the data collected by the gyro sensor 612: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
The pressure sensor 613 may be disposed on a side frame of the terminal 600 and/or on a lower layer of the touch display screen 605. When the pressure sensor 613 is disposed on the side frame of the terminal 600, a user's holding signal of the terminal 600 can be detected, and the processor 601 performs left-right hand recognition or shortcut operation according to the holding signal collected by the pressure sensor 613. When the pressure sensor 613 is disposed at the lower layer of the touch display screen 605, the processor 601 controls the operability control on the UI interface according to the pressure operation of the user on the touch display screen 605. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 614 is used for collecting a fingerprint of a user, and the processor 601 identifies the identity of the user according to the fingerprint collected by the fingerprint sensor 614, or the fingerprint sensor 614 identifies the identity of the user according to the collected fingerprint. Upon identifying that the user's identity is a trusted identity, the processor 601 authorizes the user to perform relevant sensitive operations including unlocking the screen, viewing encrypted information, downloading software, paying, and changing settings, etc. The fingerprint sensor 614 may be disposed on the front, back, or side of the terminal 600. When a physical button or vendor Logo is provided on the terminal 600, the fingerprint sensor 614 may be integrated with the physical button or vendor Logo.
The optical sensor 615 is used to collect the ambient light intensity. In one embodiment, processor 601 may control the display brightness of touch display 605 based on the ambient light intensity collected by optical sensor 615. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 605 is increased; when the ambient light intensity is low, the display brightness of the touch display screen 605 is turned down. In another embodiment, the processor 601 may also dynamically adjust the shooting parameters of the camera assembly 606 according to the ambient light intensity collected by the optical sensor 615.
A proximity sensor 616, also known as a distance sensor, is typically disposed on the front panel of the terminal 600. The proximity sensor 616 is used to collect the distance between the user and the front surface of the terminal 600. In one embodiment, when the proximity sensor 616 detects that the distance between the user and the front surface of the terminal 600 gradually decreases, the processor 601 controls the touch display 605 to switch from the screen-on state to the screen-off state; when the proximity sensor 616 detects that the distance gradually increases, the processor 601 controls the touch display 605 to switch from the screen-off state to the screen-on state.
Those skilled in the art will appreciate that the configuration shown in fig. 6 is not intended to be limiting of terminal 600 and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be used.
FIG. 7 is a schematic structural diagram of a server 700 according to an embodiment of the present application. The server 700 may vary considerably in configuration or performance, and may include one or more processors (CPUs) 701 and one or more memories 702, where the memory 702 stores at least one instruction that is loaded and executed by the processor 701 to implement the methods provided by the above method embodiments. Of course, the server may also have components such as a wired or wireless network interface, a keyboard, and an input/output interface for performing input and output, and may include other components for implementing device functions, which are not described here.
In an exemplary embodiment, a computer-readable storage medium is also provided, such as a memory comprising instructions executable by a processor in a terminal to perform the image processing method in the above embodiments. For example, the computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (14)

1. A method of image processing, the method comprising:
acquiring a horizontal screen video frame of a horizontal screen video, and intercepting an area image in the horizontal screen video frame;
generating a target image with an image size of a target size based on the area image, wherein the target size is the size of a preset vertical screen image frame;
and placing the target image on a lower layer, placing the transverse screen video frame on an upper layer, aligning the transverse screen video frame with the vertical edge of the target image, and performing image synthesis to obtain a synthesized vertical screen video frame.
2. The method of claim 1, wherein the intercepting an area image in the landscape video frame comprises:
determining the height of the horizontal screen video frame as a target height;
determining a target width based on the target height, wherein the ratio of the target height to the target width is equal to the ratio of height to width in the size of the preset vertical screen image frame;
and intercepting a region image in the horizontal screen video frame based on the target height and the target width.
3. The method of claim 1, wherein the intercepting an area image in the landscape video frame comprises:
determining the size of a region image corresponding to the size of the currently acquired transverse screen video frame based on the corresponding relation between the size of the pre-stored transverse screen video frame and the size of the region image, wherein the determined size of the region image comprises a target height and a target width, and the ratio of the height to the width in the size of each region image in the corresponding relation is equal to the ratio of the height to the width in the size of the preset vertical screen image frame;
and intercepting a region image in the horizontal screen video frame based on the target height and the target width.
4. The method of claim 2 or 3, wherein the intercepting an area image in the landscape video frame based on the target height and the target width comprises:
and intercepting a region image at the middle position of the horizontal screen video frame based on the target height and the target width.
5. The method according to claim 2 or 3, wherein the generating of the target image with the image size of the target size based on the region image comprises:
and zooming the area image based on the ratio of the height in the preset size of the vertical screen image frame to the target height to generate a target image with the image size being the target size.
6. The method of claim 1, wherein the placing the target image on a lower layer, placing the landscape video frame on an upper layer, and aligning the landscape video frame with a vertical edge of the target image for image composition to obtain a composite vertical video frame comprises:
and placing the target image on a lower layer, placing the transverse screen video frame on an upper layer, aligning the transverse screen video frame with the vertical edge of the target image, aligning the middle positions of the transverse screen video frame and the target image, and performing image synthesis to obtain a synthesized vertical screen video frame.
7. An apparatus for image processing, the apparatus comprising:
the device comprises an intercepting module, a display module and a display module, wherein the intercepting module is used for acquiring a transverse screen video frame of a transverse screen video and intercepting an area image in the transverse screen video frame;
the generating module is used for generating a target image with the image size being a target size based on the area image, wherein the target size is the size of a preset vertical screen image frame;
and the synthesis module is used for placing the target image on a lower layer, placing the transverse screen video frame on an upper layer, aligning the transverse screen video frame with the vertical edge of the target image, and synthesizing the images to obtain a synthesized vertical screen video frame.
8. The apparatus of claim 7, wherein the intercept module is configured to:
determining the height of the horizontal screen video frame as a target height;
determining a target width based on the target height, wherein the ratio of the target height to the target width is equal to the ratio of height to width in the size of the preset vertical screen image frame;
and intercepting a region image in the horizontal screen video frame based on the target height and the target width.
9. The apparatus according to claim 7, wherein the intercepting module is configured to:
determine a region image size corresponding to the size of the currently acquired horizontal screen video frame based on a pre-stored correspondence between horizontal screen video frame sizes and region image sizes, wherein the determined region image size comprises a target height and a target width, and the height-to-width ratio of each region image size in the correspondence is equal to the height-to-width ratio of the size of the preset vertical screen image frame; and
intercept a region image in the horizontal screen video frame based on the target height and the target width.
10. The apparatus according to claim 8 or 9, wherein the intercepting module is configured to:
intercept a region image at the middle position of the horizontal screen video frame based on the target height and the target width.
11. The apparatus according to claim 8 or 9, wherein the generating module is configured to:
scale the region image based on the ratio of the height in the size of the preset vertical screen image frame to the target height, to generate a target image whose image size is the target size.
12. The apparatus according to claim 7, wherein the synthesizing module is configured to:
place the target image on a lower layer, place the horizontal screen video frame on an upper layer, align the horizontal screen video frame with the vertical edges of the target image, align the middle positions of the horizontal screen video frame and the target image, and perform image synthesis to obtain a synthesized vertical screen video frame.
13. A computer device, comprising a processor and a memory, wherein the memory stores at least one instruction that is loaded and executed by the processor to perform the operations performed by the image processing method according to any one of claims 1 to 6.
14. A computer-readable storage medium having stored therein at least one instruction that is loaded and executed by a processor to perform the operations performed by the image processing method according to any one of claims 1 to 6.
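Assuming the sketches above are in scope, a single frame would flow through the claimed pipeline roughly as follows (file names and the 1280x720 input size are hypothetical):

```python
from PIL import Image

frame = Image.open("landscape_frame.png")              # e.g. a 1280x720 frame
tw, th = region_size_for((frame.width, frame.height))  # claim 3 lookup
region = crop_center(frame, tw, th)                    # claim 4 center crop
background = scale_to_target(region, PORTRAIT_SIZE)    # claim 5 lower layer
portrait = compose_portrait(frame, background)         # claim 6 composite
portrait.save("portrait_frame.png")
```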
CN201911216201.4A 2019-12-02 2019-12-02 Image processing method and device, computer equipment and storage medium Pending CN110839174A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911216201.4A CN110839174A (en) 2019-12-02 2019-12-02 Image processing method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN110839174A true CN110839174A (en) 2020-02-25

Family

ID=69578404

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911216201.4A Pending CN110839174A (en) 2019-12-02 2019-12-02 Image processing method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110839174A (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040201549A1 (en) * 2003-03-17 2004-10-14 Sebastien Weitbruch Device and method for reducing burning effects on display means
CN106484349A (en) * 2016-09-26 2017-03-08 腾讯科技(深圳)有限公司 The treating method and apparatus of live information
CN106454407A (en) * 2016-10-25 2017-02-22 广州华多网络科技有限公司 Video live broadcast method and device
WO2018145545A1 (en) * 2017-02-13 2018-08-16 广州市动景计算机科技有限公司 Full-screen setting method and device for webpage video, mobile device and computer readable storage medium
CN107707954A (en) * 2017-10-27 2018-02-16 北京小米移动软件有限公司 Video broadcasting method and device
CN109089157A (en) * 2018-06-15 2018-12-25 广州华多网络科技有限公司 Method of cutting out, display equipment and the device of video pictures
CN109547644A (en) * 2018-12-27 2019-03-29 维沃移动通信有限公司 A kind of information display method and terminal
CN110189378A (en) * 2019-05-23 2019-08-30 北京奇艺世纪科技有限公司 A kind of method for processing video frequency, device and electronic equipment

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Baidu Zhidao: "How can the black bars on both sides of a vertical-screen video be given this effect?", HTTPS://ZHIDAO.BAIDU.COM/QUESTION/757764693356980804.HTML *
Zhihu user 老耳机: "How is the video effect made where a vertical-screen video is turned horizontal and the resulting black bars are blurred?", HTTPS://WWW.ZHIHU.COM/QUESTION/51062628 *
荔枝音频: "How to create the blurred horizontal-screen background effect for a vertical-screen video", HTTPS://WWW.BILIBILI.COM/VIDEO/AV71406804 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113438436A (en) * 2020-03-23 2021-09-24 阿里巴巴集团控股有限公司 Video playing method, video conference method, live broadcasting method and related equipment
CN113438436B (en) * 2020-03-23 2023-12-19 阿里巴巴集团控股有限公司 Video playing method, video conference method, live broadcast method and related equipment
CN112333513A (en) * 2020-07-09 2021-02-05 深圳Tcl新技术有限公司 Method and device for creating vertical screen video library and storage medium
CN112333513B (en) * 2020-07-09 2024-02-06 深圳Tcl新技术有限公司 Method, equipment and storage medium for creating vertical screen video library
CN113592734A (en) * 2021-07-23 2021-11-02 北京字节跳动网络技术有限公司 Image processing method and device and electronic equipment
CN113592734B (en) * 2021-07-23 2024-01-23 北京字节跳动网络技术有限公司 Image processing method and device and electronic equipment
CN114286136A (en) * 2021-12-28 2022-04-05 咪咕文化科技有限公司 Video playing and encoding method, device, equipment and computer readable storage medium
CN115474088A (en) * 2022-09-07 2022-12-13 腾讯音乐娱乐科技(深圳)有限公司 Video processing method, computer equipment and storage medium

Similar Documents

Publication Publication Date Title
CN110971930B (en) Live virtual image broadcasting method, device, terminal and storage medium
CN108401124B (en) Video recording method and device
CN111372126B (en) Video playing method, device and storage medium
CN108449641B (en) Method, device, computer equipment and storage medium for playing media stream
CN110278464B (en) Method and device for displaying list
CN111464749B (en) Method, device, equipment and storage medium for image synthesis
CN108965922B (en) Video cover generation method and device and storage medium
CN111464830B (en) Method, device, system, equipment and storage medium for image display
CN110839174A (en) Image processing method and device, computer equipment and storage medium
CN109922356B (en) Video recommendation method and device and computer-readable storage medium
CN110533585B (en) Image face changing method, device, system, equipment and storage medium
CN109862412B (en) Method and device for video co-shooting and storage medium
CN111447389B (en) Video generation method, device, terminal and storage medium
CN110196673B (en) Picture interaction method, device, terminal and storage medium
CN110288689B (en) Method and device for rendering electronic map
CN109451248B (en) Video data processing method and device, terminal and storage medium
CN111586444B (en) Video processing method and device, electronic equipment and storage medium
CN111083526B (en) Video transition method and device, computer equipment and storage medium
CN112565806A (en) Virtual gift presenting method, device, computer equipment and medium
CN113384880A (en) Virtual scene display method and device, computer equipment and storage medium
CN109783176B (en) Page switching method and device
CN110769120A (en) Method, device, equipment and storage medium for message reminding
CN110996115B (en) Live video playing method, device, equipment, storage medium and program product
CN111369434B (en) Method, device, equipment and storage medium for generating spliced video covers
CN111464829B (en) Method, device and equipment for switching media data and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20200225)