WO2021218430A1 - Image processing method and apparatus, and electronic device

Info

Publication number
WO2021218430A1
Authority
WO
WIPO (PCT)
Prior art keywords
video
layer
instruction
video stream
image
Application number
PCT/CN2021/080248
Other languages
French (fr)
Chinese (zh)
Inventor
徐亮
Original Assignee
荣耀终端有限公司 (Honor Device Co., Ltd.)
Application filed by 荣耀终端有限公司 (Honor Device Co., Ltd.)
Publication of WO2021218430A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/478 Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4788 Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/73 Deblurring; Sharpening
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/472 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47205 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/488 Data services, e.g. news ticker
    • H04N21/4882 Data services, e.g. news ticker for displaying messages, e.g. warnings, reminders
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20172 Image enhancement details
    • G06T2207/20192 Edge enhancement; Edge preservation

Definitions

  • The present invention relates to the technical field of image processing, and in particular to an image processing method, apparatus and electronic device.
  • Embodiments of the present application provide an image processing method, apparatus and electronic device.
  • The present application provides an image processing method, executed by a terminal device, which includes: acquiring video information, where the video information includes a video stream and an instruction set, and the instruction set includes instructions input by the user when interacting while watching the video; determining a video layer and the position information of the video stream displayed on the video layer; performing image quality (PQ) enhancement on the video stream displayed on the video layer; and superimposing the PQ-enhanced video layer and an instruction layer, and then sending the result to the display screen for display.
  • The instruction layer is used to display interactive content converted from the instructions in the instruction set, and includes a first area, determined according to the position information, that is set to a transparent state.
  • After the video stream and the instruction set are obtained, the layer on which the video stream is displayed is first identified and the position information of the video stream on that layer is determined. When the video stream is displayed on the video layer, the video images in the video stream are PQ-enhanced; according to the position information and the interactive content converted from the instruction set, the corresponding area of the instruction layer is set to a transparent state and the interactive content is displayed; the PQ-enhanced video layer and the processed instruction layer are then superimposed and displayed. This yields a video that has high image quality and displays the interactive content, so that the user can see the interaction information while watching a high-image-quality video.
  • Determining the video layer includes identifying the video layer through a video layer recognition technology.
  • The method further includes adding an identifier to the video layer, where the identifier is used to determine that the data to be PQ-enhanced is the video stream displayed on the video layer.
  • Determining the position information of the video stream displayed on the video layer includes: determining, according to the size of the video image in the video stream and the size of the video layer, the position at which the video image is displayed on the video layer; and recording the coordinate information or margin information of the video image on the video layer.
  • The present application provides an image processing apparatus, including: an acquisition unit that acquires video information, where the video information includes a video stream and an instruction set, and the instruction set includes instructions input by the user when interacting while watching a video; a determining unit that determines a video layer and the position information of the video stream displayed on the video layer; and a processing unit that performs PQ enhancement on the video stream displayed on the video layer, superimposes the PQ-enhanced video layer and an instruction layer, and sends the result to the display screen for display.
  • The instruction layer is used to display interactive content converted from the instructions in the instruction set, and includes a first area, determined according to the position information, that is set to a transparent state.
  • The determining unit is specifically configured to identify the video layer through a video layer recognition technology.
  • The determining unit is further configured to add an identifier to the video layer, where the identifier is used to determine that the data to be PQ-enhanced is the video stream displayed on the video layer.
  • The determining unit is specifically configured to determine, according to the size of the video image in the video stream and the size of the video layer, the position at which the video image is displayed on the video layer, and to record the coordinate information or margin information of the video image on the video layer.
  • The present application provides an electronic device, including: a transceiver that obtains video information, where the video information includes a video stream and an instruction set, and the instruction set includes instructions input by the user when watching a video and interacting with it; a first chip that determines a video layer and the position information of the video stream displayed on the video layer; and a second chip that performs PQ enhancement on the video stream displayed on the video layer, superimposes the PQ-enhanced video layer and an instruction layer, and sends the result to the display screen for display.
  • The instruction layer is used to display interactive content converted from the instructions in the instruction set, and includes a first area, determined according to the position information, that is set to a transparent state.
  • A first channel and a second channel are provided between the first chip and the second chip. The first channel is used to send the video stream identified by the first chip to the second chip; the second channel is used to send the position information of the video stream and the instruction set from the first chip to the second chip.
  • Because the video stream is sent to the second chip separately from the position information and the instruction set, the second chip can process the video stream and the position information and instruction set separately.
  • The first channel is a data channel (DP), and the second channel is a peripheral component interconnect express (PCIe) channel.
  • The first chip is specifically configured to recognize the video layer through a video layer recognition technology.
  • The first chip is further configured to add an identifier to the video layer, where the identifier is used to determine that the data on which the second chip performs PQ enhancement is the video stream displayed on the video layer.
  • The first chip is specifically configured to determine, according to the size of the video image in the video stream and the size of the video layer, the position at which the video image is displayed on the video layer, and to record the coordinate information or margin information of the video image on the video layer.
  • The present application provides a computer-readable storage medium for storing instructions or executable code which, when executed by the processor of an electronic device, cause the electronic device to implement any possible implementation of the first aspect.
  • The present application provides a computer program product containing instructions that, when run on a computer, cause the computer to execute any possible implementation of the first aspect.
  • FIG. 1 is a schematic structural diagram of a terminal device for processing video according to an embodiment of the application;
  • FIG. 2 is a schematic diagram of the internal structure of a first chip and a second chip provided by an embodiment of the application;
  • FIG. 3 is a schematic diagram of video synthesis provided by an embodiment of the application.
  • FIG. 4 is a flowchart of an image processing method provided by an embodiment of the application.
  • FIG. 5 is a schematic structural diagram of an image processing device provided by an embodiment of the application.
  • In the prior art, the received video stream and the data from the user's interaction with the video are integrated, generating a video stream that includes the user's interactive content, which is then sent to the display screen for display, so that the user can see the interactive content while watching the video.
  • The existing Huawei Hi3751 V811 chip has a picture quality (PQ) enhancement feature, which can enhance the image display effect during video playback.
  • However, if the Hi3751 V811 chip performs PQ enhancement directly on the newly composited video image, information in the video image such as bullet comments and likes becomes unclear on the screen (for example, the edges of text appear jagged because of the PQ enhancement).
  • Terminal devices include, but are not limited to, smart TVs, tablet computers, smartphones and notebook computers, and may also include terminal devices and automated control devices independently developed for specific business scenarios.
  • An embodiment of the present application provides a terminal device that, after acquiring the video stream and the interaction instruction set, performs PQ enhancement on the video layer where the video stream is displayed, converts the instruction set into interactive content presented on the instruction layer, sets the corresponding area of the instruction layer to a transparent state according to the position information of the video layer, and finally superimposes the PQ-enhanced video layer and the processed instruction layer to obtain a video that has high image quality and displays the interactive content.
  • The smart TV 100 includes a first chip 101, a second chip 102 and a display screen 103.
  • The first chip 101 is used to process the received data, and the second chip 102 is used to synthesize the processed data.
  • The smart TV 100 also includes a first channel and a second channel.
  • The first channel and the second channel may be two physical connections between the first chip 101 and the second chip 102, each used to send data processed by the first chip 101 to the second chip 102.
  • According to the functions it performs, the first chip 101 is divided into a layer recognition module 1011, a video layer processing module 1012 and a 2D instruction stream module 1013.
  • After the smart TV 100 downloads the video stream, the instruction set and other data from the cloud through the communication unit, the layer recognition module 1011 recognizes the layer on which the video stream is displayed through the video layer recognition technology, and the video stream is then displayed on that layer.
  • The video image displayed on the display screen 103 is formed by superimposing content images displayed on layers including a video layer, an instruction layer and the like.
  • The video layer displays the video images in the video stream, while the instruction layer displays virtual buttons for controlling playback of the video stream, virtual buttons for controlling the playback speed, content images corresponding to instructions input when the user interacts, and so on.
  • The instruction set mainly includes the instructions the user inputs when interacting with the video being watched, such as entering text, tapping the "Like" virtual button or tapping the "Send a gift" virtual button.
  • The smart TV 100 converts each instruction into an image displayed on the instruction layer, such as the text itself, a "thumbs up" corresponding to a like, or a "rocket" corresponding to a gift.
  • After the first chip 101 receives the data downloaded from the cloud, it uses the video layer recognition technology to identify the video layer, so that the video stream data displayed on the video layer can subsequently be sent to the second chip 102 through a separate channel. After the video layer is identified, it is marked with an identifier, so that after the data is sent to the second chip 102, the second chip 102 can recognize the video layer according to the identifier and then perform PQ enhancement on the video images displayed on that layer.
  • The video layer processing module 1012 is configured to send the video stream to the second chip 102 through the first channel and, at the same time, calculate the position information of the video layer and send it to the 2D instruction stream module 1013.
  • Generally, the size of the video image in the video stream is not exactly the same as the size of the display screen 103 of the smart TV 100, so when a video is watched there are usually "black" areas above and below the video image displayed on the display screen 103, or to its left and right. It is therefore necessary to record the position at which the video image is displayed on the display screen 103, so that when the video layer is later superimposed with the instruction layer, the area where the video stream is displayed is not obscured by the instruction layer.
  • When the size of the video image in the video stream differs from the size of the display screen 103, the video layer processing module 1012 places the video image at a set position on the video layer according to set requirements, or centers the video image on the video layer, or, because the video image is too large or too small, enlarges or reduces it so that its width matches the width of the video layer or its height matches the height of the video layer, and then places it at the set position. After the display position of the video image on the video layer is determined, that position is recorded.
  • The position information may be coordinate information: for example, with the lower-left corner of the video layer as the origin, the coordinates of the four vertices of the video image on the video layer are recorded (existing video images are rectangular in shape; the "heart"- or "star"-shaped videos produced in apps such as Douyin or Meitu are obtained by adding a "heart"-shaped frame on top of the original rectangular video image). The position information may also be distance information: for example, the distance between the left border of the video image and the left border of the video layer, the distance between their right borders, the distance between their top borders, and the distance between their bottom borders.
  • The 2D instruction stream module 1013 is configured to send the instruction set and the position information of the video layer to the second chip 102 through the second channel.
  • The video layer processing module 1012 establishes the first channel with the second chip 102, and the 2D instruction stream module 1013 establishes the second channel with the second chip 102.
  • After the first chip 101 receives the video stream, the instruction set and other data, the layer recognition module 1011 recognizes the video stream, and the video stream is sent to the second chip 102 through the first channel established by the video layer processing module 1012.
  • The instruction set and the position information of the video layer calculated by the video layer processing module 1012 are sent to the second chip 102 through the second channel established by the 2D instruction stream module 1013.
  • The video stream on the one hand, and the instruction set and the position information of the video layer on the other, are sent through two different channels to different modules in the second chip 102 for processing, which prevents the second chip from performing PQ enhancement on all of the data.
  • The first channel uses a data path (DP), and the second channel uses a peripheral component interconnect express (PCIe) channel.
  • The second chip 102 is divided into a 2D instruction stream module 1021 and a PQ enhancement module 1022.
  • The 2D instruction stream module 1021 establishes a connection with the 2D instruction stream module 1013 through the second channel. It receives the position information of the video layer and the instruction set, sets the area of the instruction layer that overlaps the video layer to a transparent state according to the position information, and then converts each instruction in the instruction set into the text, thumbs-up, rocket or other image corresponding to the user's interaction and displays it on the instruction layer.
  • Specifically, after the position information of the video layer is received, the area in which the video image of the video stream will be displayed when the video layer and the instruction layer are superimposed is determined from the coordinates or margins in that position information, and this area is set to a transparent state, so that when the video layer and the instruction layer are subsequently superimposed and displayed, the video stream displayed on the video layer is not obscured by the instruction layer. The display effect is shown in Figure 3(a).
  • The interactive images can be displayed in any area of the instruction layer, including the overlapping area where the video layer and the instruction layer are superimposed; this is not limited in this application.
  • The PQ enhancement module 1022 establishes a connection with the video layer processing module 1012 through the first channel. It identifies the video layer by recognizing the identifier on it, performs PQ enhancement on the video images in the video stream displayed on the video layer, and displays the enhanced video images on the video layer, as shown in Figure 3(b).
  • Finally, the second chip 102 superimposes the video layer PQ-enhanced by the PQ enhancement module 1022 and the instruction layer processed by the 2D instruction stream module 1021, and then sends the result to the display screen 103 for display.
  • In summary, after the terminal device obtains the video stream, the instruction set and other data, it recognizes the video layer of the video stream through the first chip, sends the video stream to the second chip through the first channel, and at the same time calculates the position information of the video image displayed on the video layer; the position information and the instruction set are then sent to the second chip through the second channel. After receiving the video stream, the second chip performs PQ enhancement on the video images displayed on the video layer.
  • The hardware used to implement the foregoing embodiments of the present application is not limited to two chips; any number of chips can be used.
  • Although the above is described as being implemented by two chips, those skilled in the art know that a chip is generally composed of multiple modules; if all the modules of the first chip 101 and the second chip 102 are packaged together, the above embodiment can be regarded as being implemented by a single chip.
  • FIG. 4 is a flowchart of an image processing method provided by an embodiment of the application. As shown in Figure 4, the image processing method provided by this application is executed by a terminal device, and the specific execution steps are as follows:
  • Step S401: Obtain video information.
  • The acquired video information includes a video stream and an instruction set. The video stream is the video watched by the user.
  • The instruction set mainly includes the instructions the user enters when interacting with the video, such as entering text, tapping the "Like" virtual button or tapping the "Send a gift" virtual button. After receiving the instruction set, the smart TV 100 converts each instruction into an image displayed on the instruction layer, such as the text itself, a "thumbs up" corresponding to a like, or a "rocket" corresponding to a gift.
  • Step S403: Determine the video layer and the position information of the video stream on the video layer.
  • The video image displayed on the display screen is formed by superimposing the content images displayed on layers including a video layer, an instruction layer and the like, and the dimensions of each layer are exactly the same as those of the display screen.
  • The video layer displays the video images in the video stream, while the instruction layer displays virtual buttons for controlling playback of the video stream, virtual buttons for controlling the playback speed, content images corresponding to instructions input when the user interacts, and so on.
  • The video layer is identified through the video layer recognition technology, so that the video stream data displayed on it can subsequently be transmitted through a separate channel; after the video layer is identified, it is marked with an identifier, so that the video stream displayed on it can later be PQ-enhanced by recognizing that identifier.
  • The size of the video image in the video stream is usually not exactly the same as the size of the display screen, so when a video is watched there are usually "black" areas above and below the video image, or to its left and right. It is therefore necessary to record the position at which the video image is displayed on the display screen, so that when the video layer is later superimposed with the instruction layer, the area where the video stream is displayed is not obscured by the instruction layer.
  • When the size of the video image in the video stream differs from that of the display screen, the terminal device places the video image at a set position on the video layer according to set requirements, or centers the video image on the video layer, or, because the video image is too large or too small, enlarges or reduces it so that its width matches the width of the video layer or its height matches the height of the video layer, and then places it at the set position. After the display position of the video image on the video layer is determined, that position is recorded.
  • The position information may be coordinate information, for example the coordinates of the four vertices of the video image on the video layer recorded with the lower-left corner of the video layer as the origin; it may also be distance information, for example the distance between the left border of the video image and the left border of the video layer, the distance between their right borders, the distance between their top borders, and the distance between their bottom borders.
  • Step S405: Perform PQ enhancement on the video stream displayed on the video layer.
  • Step S407: Superimpose the PQ-enhanced video layer and the instruction layer, and then send the result to the display screen for display.
  • The display area of the video image is determined according to the coordinates or margins in the position information of the video layer, and the corresponding area of the instruction layer is set to a transparent state; each instruction in the instruction set is converted into the text, thumbs-up, rocket or other image corresponding to the user's interaction and displayed on the instruction layer.
  • The terminal device recognizes the video layer by recognizing the identifier on it, performs PQ enhancement on the video images in the video stream displayed on the video layer, and then displays the enhanced video images on the video layer.
  • Finally, the PQ-enhanced video layer and the processed instruction layer are superimposed and sent to the display screen, and the terminal device displays the video on the display screen.
  • In the image processing method provided by the embodiment of the application, after the video stream and the instruction set are obtained, the layer on which the video stream is displayed is first identified.
  • FIG. 5 is a schematic structural diagram of an image processing device provided by an embodiment of the application.
  • The image processing apparatus 500 provided by the embodiment of the present application includes an acquisition unit 501, a determining unit 503 and a processing unit 505.
  • The acquisition unit 501 is used to obtain video information, where the video information includes a video stream and an instruction set, and the instruction set includes instructions input by the user when interacting while watching the video.
  • The determining unit 503 is used to determine the video layer and the position information of the video stream on the video layer.
  • The processing unit 505 is used to perform PQ enhancement on the video stream displayed on the video layer, superimpose the PQ-enhanced video layer and the instruction layer, and send the result to the display screen for display, where the instruction layer is used to display interactive content converted from the instructions in the instruction set and includes a first area in a transparent state determined according to the position information.
  • For the specific implementation of each unit, refer to the description of the previous embodiments.
  • Various aspects or features of the embodiments of the present application can be implemented as methods, apparatuses or articles of manufacture using standard programming and/or engineering techniques.
  • The term "article of manufacture" used in this application encompasses a computer program accessible from any computer-readable device, carrier or medium.
  • Computer-readable media may include, but are not limited to, magnetic storage devices (for example, hard disks, floppy disks or tapes), optical discs (for example, compact discs (CD) and digital versatile discs (DVD)), smart cards, and flash memory devices (for example, erasable programmable read-only memory (EPROM), cards, sticks or key drives).
  • The various storage media described herein may represent one or more devices and/or other machine-readable media for storing information.
  • The term "machine-readable medium" may include, but is not limited to, wireless channels and various other media capable of storing, containing and/or carrying instructions and/or data.
  • The image processing apparatus 500 in FIG. 5 may be implemented in whole or in part by software, hardware, firmware or any combination thereof.
  • When implemented by software, it can be implemented in whole or in part in the form of a computer program product.
  • The computer program product includes one or more computer instructions.
  • The computer may be a general-purpose computer, a special-purpose computer, a computer network or another programmable device.
  • The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example from one website, computer, server or data center to another.
  • The computer-readable storage medium may be any available medium accessible by a computer, or a data storage device such as a server or data center integrating one or more available media.
  • The available medium may be a magnetic medium (for example, a floppy disk, hard disk or magnetic tape), an optical medium (for example, a DVD) or a semiconductor medium (for example, a solid state disk (SSD)).
  • The magnitude of the sequence numbers of the above processes does not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
  • The disclosed system, apparatus and method can be implemented in other ways.
  • The apparatus embodiments described above are merely illustrative.
  • The division into units is only a logical functional division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • The mutual coupling, direct coupling or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, apparatuses or units, and may be electrical, mechanical or in other forms.
  • The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • If the functions are implemented in the form of software functional units and sold or used as independent products, they can be stored in a computer-readable storage medium.
  • Based on this understanding, the technical solutions of the embodiments of the present application, in essence, or the part contributing to the prior art, or part of the technical solutions, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions to cause a computer device (which may be a personal computer, a server, an access network device or the like) to execute all or some of the steps of the methods described in the embodiments of the present application.
  • The aforementioned storage media include: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or other media that can store program code.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The present application relates to the technical field of image processing, and provides an image processing method and apparatus, and an electronic device. The image processing method comprises: acquiring video information; determining a video layer and the information on the position where a video stream is displayed on the video layer; performing PQ enhancement on the video stream displayed on the video layer; and superimposing the PQ-enhanced video layer and an instruction layer, and then sending the result to a display screen for display. In the present application, after a video stream and an instruction set are obtained and the position where the video stream is displayed on the layer is determined, PQ enhancement is performed on the video stream; according to the position information and the interaction content converted from the instruction set, the corresponding area of the instruction layer is set to a transparent state and the interaction content is displayed; the PQ-enhanced video layer and the processed instruction layer are then superimposed and displayed. A video that has high image quality and displays interaction content is thereby obtained, so that a user can also see interaction information while watching the video at high image quality.

Description

Image processing method, apparatus and electronic device
This application claims priority to the Chinese patent application with application number 202010338390.9, entitled "Image Processing Method, Apparatus and Electronic Equipment", filed with the State Intellectual Property Office of China on April 26, 2020, the entire content of which is incorporated herein by reference.
Technical Field
The present invention relates to the technical field of image processing, and in particular to an image processing method, apparatus and electronic device.
Background
At present, the most popular video applications (apps) on the market, such as Douyu, Douyin and iQiyi, all provide a large number of interactive functions (such as likes, comments and so on). With the spread of 5G and 4K technologies, video apps will offer a better experience on TVs; at present, however, the main function of video apps on smart TVs is still basic video playback, and the interactive functions are not well supported on TVs, which greatly degrades the user experience.
Summary of the Invention
In order to overcome the above problem that interactive information cannot be displayed on an electronic device, the embodiments of the present application provide an image processing method, an apparatus and an electronic device.
In order to achieve the foregoing objectives, the embodiments of the present application adopt the following technical solutions:
In a first aspect, the present application provides an image processing method, executed by a terminal device, which includes: acquiring video information, where the video information includes a video stream and an instruction set, and the instruction set includes instructions input by the user when interacting while watching the video; determining a video layer and the position information of the video stream displayed on the video layer; performing image quality (PQ) enhancement on the video stream displayed on the video layer; and superimposing the PQ-enhanced video layer and an instruction layer, and then sending the result to the display screen for display, where the instruction layer is used to display interactive content converted from the instructions in the instruction set, and the instruction layer includes a first area, determined according to the position information, that is set to a transparent state.
In the present invention, after the video stream and the instruction set are obtained, the layer on which the video stream is displayed is first identified and the position information of the video stream on that layer is determined. When the video stream is displayed on the video layer, the video images in the video stream are PQ-enhanced; according to the position information and the interactive content converted from the instruction set, the corresponding area of the instruction layer is set to a transparent state and the interactive content is displayed; the PQ-enhanced video layer and the processed instruction layer are then superimposed and displayed. This yields a video that has high image quality and also displays the interactive content, so that the user can see the interaction information while watching a high-image-quality video.
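As a rough, non-authoritative illustration of the flow described above, the following Python sketch composites a PQ-enhanced video layer with an instruction layer whose area over the video is transparent. Everything in it is an assumption made for illustration: the NumPy RGB/RGBA buffers, the unsharp-mask filter standing in for the unspecified PQ enhancement, and helper names such as build_instruction_layer.

```python
# Illustrative sketch only: the buffer layout, the unsharp-mask stand-in for PQ
# enhancement, and all function names are assumptions, not the patented implementation.
import numpy as np

def pq_enhance(video_rgb: np.ndarray) -> np.ndarray:
    """Stand-in for picture-quality (PQ) enhancement: simple unsharp masking."""
    f = video_rgb.astype(np.float32)
    blurred = f.copy()
    blurred[1:-1, 1:-1] = (f[:-2, 1:-1] + f[2:, 1:-1] + f[1:-1, :-2] + f[1:-1, 2:]) / 4.0
    return np.clip(f + 0.5 * (f - blurred), 0, 255).astype(np.uint8)

def build_instruction_layer(size, video_rect):
    """RGBA instruction layer whose 'first area' over the video is transparent."""
    h, w = size
    layer = np.zeros((h, w, 4), dtype=np.uint8)
    layer[..., 3] = 160                      # partially opaque UI background
    x0, y0, x1, y1 = video_rect
    layer[y0:y1, x0:x1, 3] = 0               # first area set to a transparent state
    # Interactive content (text, thumbs-up, rocket, ...) would be rasterised here.
    return layer

def compose(video_rgb, instruction_rgba):
    """Superimpose the instruction layer on the PQ-enhanced video layer."""
    alpha = instruction_rgba[..., 3:4].astype(np.float32) / 255.0
    out = (1.0 - alpha) * video_rgb + alpha * instruction_rgba[..., :3]
    return out.astype(np.uint8)

frame = np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8)   # decoded video frame
ui = build_instruction_layer((1080, 1920), video_rect=(0, 140, 1920, 940))
display_frame = compose(pq_enhance(frame), ui)
```

Because the transparent first area covers exactly the region where the video is shown, the enhanced video is never overwritten by the instruction layer, while the interactive content outside that area remains unaffected by the enhancement.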
In another possible implementation, determining the video layer includes: identifying the video layer through a video layer recognition technology.
In another possible implementation, the method further includes: adding an identifier to the video layer, where the identifier is used to determine that the data to be PQ-enhanced is the video stream displayed on the video layer.
In another possible implementation, determining the position information of the video stream displayed on the video layer includes: determining, according to the size of the video image in the video stream and the size of the video layer, the position at which the video image in the video stream is displayed on the video layer; and recording the coordinate information or margin information of the video image in the video stream on the video layer.
In a second aspect, the present application provides an image processing apparatus, including: an acquisition unit that acquires video information, where the video information includes a video stream and an instruction set, and the instruction set includes instructions input by the user when interacting while watching a video; a determining unit that determines a video layer and the position information of the video stream displayed on the video layer; and a processing unit that performs PQ enhancement on the video stream displayed on the video layer, superimposes the PQ-enhanced video layer and an instruction layer, and sends the result to the display screen for display, where the instruction layer is used to display interactive content converted from the instructions in the instruction set, and the instruction layer includes a first area, determined according to the position information, that is set to a transparent state.
In another possible implementation, the determining unit is specifically configured to identify the video layer through a video layer recognition technology.
In another possible implementation, the determining unit is further configured to add an identifier to the video layer, where the identifier is used to determine that the data to be PQ-enhanced is the video stream displayed on the video layer.
In another possible implementation, the determining unit is specifically configured to determine, according to the size of the video image in the video stream and the size of the video layer, the position at which the video image in the video stream is displayed on the video layer, and to record the coordinate information or margin information of the video image in the video stream on the video layer.
In a third aspect, the present application provides an electronic device, including: a transceiver that obtains video information, where the video information includes a video stream and an instruction set, and the instruction set includes instructions input by the user when watching a video and interacting with it; a first chip that determines a video layer and the position information of the video stream displayed on the video layer; and a second chip that performs PQ enhancement on the video stream displayed on the video layer, superimposes the PQ-enhanced video layer and an instruction layer, and sends the result to the display screen for display, where the instruction layer is used to display interactive content converted from the instructions in the instruction set, and the instruction layer includes a first area, determined according to the position information, that is set to a transparent state.
In another possible implementation, a first channel and a second channel are provided between the first chip and the second chip. The first channel is used to send the video stream identified by the first chip to the second chip; the second channel is used to send the position information of the video stream and the instruction set from the first chip to the second chip.
In the present invention, the video stream is sent to the second chip separately from the position information and the instruction set, so that the second chip can process the video stream and the position information and instruction set separately.
In another possible implementation, the first channel is a data channel (DP), and the second channel is a peripheral component interconnect express (PCIe) channel.
In another possible implementation, the first chip is specifically configured to recognize the video layer through a video layer recognition technology.
In another possible implementation, the first chip is further configured to add an identifier to the video layer, where the identifier is used to determine that the data on which the second chip performs PQ enhancement is the video stream displayed on the video layer.
In another possible implementation, the first chip is specifically configured to determine, according to the size of the video image in the video stream and the size of the video layer, the position at which the video image in the video stream is displayed on the video layer, and to record the coordinate information or margin information of the video image in the video stream on the video layer.
In a fourth aspect, the present application provides a computer-readable storage medium for storing instructions or executable code which, when executed by the processor of an electronic device, cause the electronic device to implement any possible implementation of the first aspect.
In a fifth aspect, the present application provides a computer program product containing instructions that, when run on a computer, cause the computer to execute any possible implementation of the first aspect.
Description of the Drawings
The following briefly introduces the drawings needed in the description of the embodiments or the prior art.
FIG. 1 is a schematic structural diagram of a terminal device for processing video provided by an embodiment of the application;
FIG. 2 is a schematic diagram of the internal structure of a first chip and a second chip provided by an embodiment of the application;
FIG. 3 is a schematic diagram of video synthesis provided by an embodiment of the application;
FIG. 4 is a flowchart of an image processing method provided by an embodiment of the application;
FIG. 5 is a schematic structural diagram of an image processing apparatus provided by an embodiment of the application.
Detailed Description of the Embodiments
The technical solutions in the embodiments of the present application will be described below in conjunction with the drawings in the embodiments of the present application.
To address the problem that users cannot see interactive information while watching TV, in the prior art, for example on Huawei's existing smart TVs, the received video stream and the data from the user's interaction with the video are integrated to generate a single video stream that includes the user's interactive content, which is then sent to the display screen for display, so that the user can see the interactive content while watching the video.
With the development of technology and users' pursuit of video image quality, high-image-quality video is becoming more and more important. The existing Huawei Hi3751 V811 chip has a picture quality (PQ) enhancement feature, which can enhance the image display effect during video playback. However, if the Hi3751 V811 chip performs PQ enhancement directly on the newly composited video image, information in the video image such as bullet comments and likes becomes unclear on the screen (for example, the edges of text appear jagged because of the PQ enhancement).
Therefore, with existing solutions, a user who wants to use the interactive functions while watching a video cannot watch the video at high image quality, and a user who wants to watch at high image quality cannot use the interactive functions. This "you cannot have it both ways" situation is clearly unacceptable to increasingly demanding users.
In the embodiments of the present application, terminal devices include, but are not limited to, smart TVs, tablet computers, smartphones and notebook computers, and may also include terminal devices and automated control devices independently developed for specific business scenarios.
To resolve the above situation, in which the interactive functions and high-image-quality viewing cannot be achieved at the same time, an embodiment of the present application provides a terminal device. After acquiring the video stream and the interaction instruction set, the terminal device performs PQ enhancement on the video layer where the video stream is displayed, converts the instruction set into interactive content presented on the instruction layer, sets the corresponding area of the instruction layer to a transparent state according to the position information of the video layer, and finally superimposes the PQ-enhanced video layer and the processed instruction layer to obtain a video that has high image quality and displays the interactive content.
Taking a smart TV as an example of the terminal device, as shown in FIG. 1, the smart TV 100 includes a first chip 101, a second chip 102 and a display screen 103. The first chip 101 is used to process the received data, and the second chip 102 is used to synthesize the processed data.
In addition, the smart TV 100 also includes a first channel and a second channel. The first channel and the second channel may be two physical connections between the first chip 101 and the second chip 102, each used to send data processed by the first chip 101 to the second chip 102.
Exemplarily, as shown in FIG. 2, according to the functions it performs, the first chip 101 is divided into a layer recognition module 1011, a video layer processing module 1012 and a 2D instruction stream module 1013.
After the smart TV 100 downloads the video stream, the instruction set and other data from the cloud through the communication unit, the layer recognition module 1011 recognizes the layer on which the video stream is displayed through the video layer recognition technology, and the video stream is then displayed on that layer.
Generally, the video image shown on the display screen 103 is formed by superimposing the content images displayed on layers including a video layer, an instruction layer and the like. The video layer displays the video images in the video stream, while the instruction layer displays virtual buttons for controlling playback of the video stream, virtual buttons for controlling the playback speed, content images corresponding to instructions input when the user interacts, and so on.
In the embodiment of the present application, the instruction set mainly includes the instructions the user inputs when interacting with the video being watched, such as entering text, tapping the "Like" virtual button or tapping the "Send a gift" virtual button. After receiving the instruction set, the smart TV 100 converts each instruction into an image displayed on the instruction layer, such as the text itself, a "thumbs up" corresponding to a like, or a "rocket" corresponding to a gift.
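The conversion from interaction instructions to instruction-layer images is described only by example (text, a like shown as a thumbs up, a gift shown as a rocket). A minimal sketch of such a mapping, with a hypothetical instruction format and icon names, might look as follows.

```python
# Hypothetical mapping from interaction instructions to instruction-layer content;
# the instruction dictionary format and icon identifiers are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class OverlayItem:
    kind: str      # "text" or "icon"
    payload: str   # the text itself, or an icon name

def convert_instruction(instruction: dict) -> OverlayItem:
    if instruction["type"] == "comment":
        return OverlayItem("text", instruction["text"])
    if instruction["type"] == "like":
        return OverlayItem("icon", "thumbs_up")      # "Like" -> thumbs-up image
    if instruction["type"] == "gift":
        return OverlayItem("icon", "rocket")         # "Send a gift" -> rocket image
    raise ValueError(f"unknown instruction type: {instruction['type']}")

overlay_items = [convert_instruction(i) for i in (
    {"type": "comment", "text": "Nice shot!"},
    {"type": "like"},
    {"type": "gift"},
)]
```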
After the first chip 101 receives the data downloaded from the cloud, it uses the video layer recognition technology to identify the video layer, so that the video stream data displayed on the video layer can subsequently be sent to the second chip 102 through a separate channel. After the video layer is identified, it is marked with an identifier, so that after the data is sent to the second chip 102, the second chip 102 can recognize the video layer according to the identifier and then perform PQ enhancement on the video images displayed on that layer.
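The patent leaves the video layer recognition technology and the form of the identifier unspecified. The sketch below, with assumed Layer fields and an assumed pixel-format heuristic, only illustrates the idea of tagging the recognized video layer so that a later stage enhances that layer alone.

```python
# Sketch of layer recognition and marking; the Layer fields and the pixel-format
# heuristic are assumptions, since only a generic "video layer recognition
# technology" and an added identifier are described.
from dataclasses import dataclass, field

@dataclass
class Layer:
    name: str
    pixel_format: str                 # e.g. "NV12" for decoded video, "RGBA" for UI
    tags: set = field(default_factory=set)

def identify_and_mark_video_layer(layers):
    for layer in layers:
        if layer.pixel_format in ("NV12", "YUV420"):   # assumed recognition rule
            layer.tags.add("PQ_ENHANCE")               # identifier read downstream
            return layer
    return None

video_layer = identify_and_mark_video_layer([Layer("ui", "RGBA"), Layer("video", "NV12")])
```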
视频图层处理模块1012用于将视频流通过第一通道发送给第二芯片102,同时计算出视频图层的位置信息,并发送给2D指令流模块1013。The video layer processing module 1012 is configured to send the video stream to the second chip 102 through the first channel, and at the same time calculate the position information of the video layer, and send it to the 2D instruction stream module 1013.
一般而言,视频流中的视频图像的尺寸和智能电视100显示屏103的尺寸是不完全相同的,所以通常我们观看视频时,在显示屏103上显示的视频图像的上下两个区域有“黑色”部分,或视频图像的左右两个区域有“黑色”部分。因此,需要记录视频图像在显示屏103显示的位置,以便后续在与指令图层叠加时,确定视频流显示的位置不会被指令图层遮挡。Generally speaking, the size of the video image in the video stream is not exactly the same as the size of the smart TV 100 display 103, so usually when we watch a video, the upper and lower areas of the video image displayed on the display 103 have " "Black" part, or the left and right areas of the video image have "black" parts. Therefore, it is necessary to record the display position of the video image on the display screen 103, so as to ensure that the display position of the video stream will not be obscured by the instruction layer when it is superimposed with the instruction layer.
一种可能实施例中,当视频流中的视频图像和显示屏103的尺寸不相同时,视频图层处理模块1012按照设定的要求将视频图像设置在视频图层的设定位置上,或将视频图像按照“居中”原则设置在视频图层的中心位置,或因视频图像过大或过小,对视频流中的视频图像进行放大或缩小处理,使得视频图像的左右长度与视频图层的左右长度相同、或让视频图像的上下宽度与视频图层的上下宽度相同后,设置在视频图层的设定位置上。在确定视频图像在视频图层显示的位置后,记录下视频图像在视频图层上的位置。In a possible embodiment, when the size of the video image in the video stream and the display screen 103 are not the same, the video layer processing module 1012 sets the video image at the set position of the video layer according to the set requirements, or Set the video image in the center of the video layer according to the principle of "centering", or because the video image is too large or too small, the video image in the video stream is enlarged or reduced, so that the left and right lengths of the video image are the same as the video layer. After the left and right lengths are the same, or the vertical width of the video image is the same as the vertical width of the video layer, set it in the setting position of the video layer. After determining the display position of the video image on the video layer, record the position of the video image on the video layer.
其中,位置信息可以为坐标信息,例如,以视频图层的左下角为左边原点,记录视频图像(现有视频图像的形状均为长方形,对于我们抖音、美图等APP上制作的如“心”型、“星”型的视频都是在原视频图像上添加“心”型框得到“心”型视频)在视频图层上的四个顶点的坐标;也可以为距离信息,例如,视频图像的左边框与视频图层的左边框之间的距离、视频图像的右边框与视频图层的右边框之间的距离、视频图像的上边框与视频图层的上边框之间的距离,以及视频图像的下边框与视频图层的下边框之间的距离。Among them, the position information can be coordinate information. For example, take the lower left corner of the video layer as the left origin to record the video image (the existing video images are all rectangular in shape. For those made on our Douyin, Meitu, etc. apps, such as " The “heart” and “star”-shaped videos are all adding a “heart”-shaped frame to the original video image to obtain the “heart”-shaped video) the coordinates of the four vertices on the video layer; it can also be distance information, for example, the video The distance between the left border of the image and the left border of the video layer, the distance between the right border of the video image and the right border of the video layer, the distance between the top border of the video image and the top border of the video layer, And the distance between the bottom border of the video image and the bottom border of the video layer.
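One way to compute this position information is to letterbox the video image inside the video layer: scale it uniformly until its width or height matches the layer, centre it, and record both the vertex coordinates and the margins. The sketch below is a minimal illustration assuming this "centring" behaviour and a lower-left origin; it is not the patented implementation.
    def fit_video_on_layer(video_w: int, video_h: int, layer_w: int, layer_h: int):
        """Centre the video image on the video layer, scaling it so that either its
        width or its height matches the layer, and return the position information
        both as vertex coordinates (lower-left origin) and as margins."""
        scale = min(layer_w / video_w, layer_h / video_h)
        disp_w, disp_h = round(video_w * scale), round(video_h * scale)
        left = (layer_w - disp_w) // 2
        bottom = (layer_h - disp_h) // 2
        corners = [
            (left, bottom),                    # lower-left vertex
            (left + disp_w, bottom),           # lower-right vertex
            (left + disp_w, bottom + disp_h),  # upper-right vertex
            (left, bottom + disp_h),           # upper-left vertex
        ]
        margins = {
            "left": left,
            "right": layer_w - (left + disp_w),
            "bottom": bottom,
            "top": layer_h - (bottom + disp_h),
        }
        return corners, margins
For example, a 1920x800 video image placed on a 1920x1080 layer yields top and bottom margins of 140 pixels each, which correspond to the black bars described above.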
The 2D instruction stream module 1013 is configured to send the instruction set and the position information of the video layer to the second chip 102 through the second channel.
In the embodiments of this application, the video layer processing module 1012 establishes a first channel with the second chip 102, and the 2D instruction stream module 1013 establishes a second channel with the second chip 102. After the first chip 101 has received the video stream, the instruction set, and other data, and the layer recognition module 1011 has identified the video stream, the video stream is sent to the second chip 102 through the first channel established by the video layer processing module 1012, while the instruction set and the position information of the video layer calculated by the video layer processing module 1012 are sent to the second chip 102 through the second channel established by the 2D instruction stream module 1013. In this application, the video stream on the one hand, and the instruction set together with the position information of the video layer on the other, are sent over two different channels to different modules of the second chip 102 for processing, which avoids the second chip performing PQ enhancement on all of the data.
In a possible embodiment, the first channel is a data path (DP), and the second channel is a peripheral component interconnect express (PCIe) channel.
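The split across the two inter-chip links could be modelled as shown below; Python queues merely stand in for the DP and PCIe transports, which are hardware interfaces rather than software objects, so this is a sketch of the routing idea only.
    from queue import Queue

    dp_channel = Queue()    # first channel: carries only the video stream
    pcie_channel = Queue()  # second channel: carries the instruction set and position info

    def dispatch(video_frames, instruction_set, position_info):
        """Route the downloaded data so that only the video stream travels to the
        PQ-enhancement path, while the metadata takes the second channel."""
        for frame in video_frames:
            dp_channel.put(("video_frame", frame))
        pcie_channel.put(("metadata", {
            "instructions": instruction_set,
            "video_layer_position": position_info,
        }))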
According to the functions performed by the second chip 102, the second chip 102 is divided into a 2D instruction stream module 1021 and a PQ enhancement module 1022.
The 2D instruction stream module 1021 establishes a connection with the 2D instruction stream module 1013 through the second channel and is configured to receive the position information of the video layer and the instruction set, to set the region of the instruction layer that overlaps the video layer when the two are superimposed to a transparent state according to the position information of the video layer, and then to convert each instruction in the instruction set into the images corresponding to the user's interactive input, such as text, a thumbs-up, or a rocket, and display them on the instruction layer.
In the embodiments of this application, since the shape and size of the instruction layer are exactly the same as those of the display screen 103, upon receiving the position information of the video layer, the region in which the video image of the video stream is displayed when the video layer and the instruction layer are superimposed is determined from the coordinates or margins in that position information, and that region is then set to a transparent state, so that when the video layer and the instruction layer are subsequently superimposed and displayed, the video stream displayed on the video layer is not occluded by the instruction layer; the display effect is shown in FIG. 3(a).
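Setting the overlap region to a transparent state can be sketched as clearing the alpha channel inside the rectangle given by the margins. The array layout assumed here (height x width x RGBA, rows indexed from the top) is an illustration; the description only requires that the region be made transparent.
    import numpy as np

    def punch_transparent_window(instruction_layer: np.ndarray, margins: dict) -> np.ndarray:
        """Make the part of the RGBA instruction layer that overlaps the video image
        fully transparent so the video underneath is not occluded."""
        h, w, _ = instruction_layer.shape
        top, bottom = margins["top"], margins["bottom"]
        left, right = margins["left"], margins["right"]
        # rows are indexed from the top of the screen, so the slice starts at the top margin
        instruction_layer[top:h - bottom, left:w - right, 3] = 0  # alpha = 0 means transparent
        return instruction_layer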
In addition, the interactive images may be displayed in any region of the instruction layer, or in the region where the video layer and the instruction layer overlap when superimposed; this is not limited in this application.
The PQ enhancement module 1022 establishes a connection with the video layer processing module 1012 through the first channel and is configured to recognize the video layer from the identifier on it, to perform PQ enhancement on the video images of the video stream displayed on the video layer, and to display the enhanced video images on the video layer, as shown in FIG. 3(b).
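The description does not spell out the PQ-enhancement algorithm itself, so the sketch below uses a mild contrast stretch purely as a placeholder for whatever picture-quality pipeline the second chip actually runs (contrast, colour, sharpness, and so on).
    import numpy as np

    def pq_enhance(frame: np.ndarray) -> np.ndarray:
        """Placeholder for PQ enhancement: a gentle contrast boost around the mean.
        The real enhancement performed by the second chip is not specified here."""
        f = frame.astype(np.float32)
        mean = f.mean(axis=(0, 1), keepdims=True)   # per-channel mean
        enhanced = (f - mean) * 1.1 + mean          # stretch contrast by 10%
        return np.clip(enhanced, 0, 255).astype(np.uint8)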
Finally, the second chip 102 superimposes the video layer that has undergone PQ enhancement by the PQ enhancement module 1022 with the instruction layer that has been processed by the 2D instruction stream module 1021, and sends the result to the display screen 103 for display.
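The final superposition of the PQ-enhanced video layer and the processed instruction layer amounts to ordinary alpha compositing; a minimal sketch, assuming an RGB video layer and an RGBA instruction layer of the same resolution:
    import numpy as np

    def compose(video_layer: np.ndarray, instruction_layer: np.ndarray) -> np.ndarray:
        """Overlay the processed instruction layer on the PQ-enhanced video layer.
        In the transparent window (alpha = 0) the video shows through untouched."""
        alpha = instruction_layer[..., 3:4].astype(np.float32) / 255.0
        ui_rgb = instruction_layer[..., :3].astype(np.float32)
        out = ui_rgb * alpha + video_layer.astype(np.float32) * (1.0 - alpha)
        return out.astype(np.uint8)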
In the embodiments of this application, after obtaining the video stream, the instruction set, and other data, the terminal device uses the first chip to identify the video layer on which the video stream is located, sends the video stream to the second chip through the first channel, and at the same time calculates the position information of the video image displayed on the video layer; the position information and the instruction set are then sent to the second chip through the second channel. After receiving the video stream, the second chip performs PQ enhancement on the video images displayed on the video layer; after receiving the position information and the instruction set, it sets the region of the original instruction layer that overlaps the video layer when superimposed to a transparent state and displays the interactive images on the processed instruction layer. The PQ-enhanced video layer and the processed instruction layer are then superimposed and displayed, so that a video with high picture quality that also shows interactive content is obtained, and the user can see the interaction information while watching a high-quality video.
It should be noted that the hardware on which the above embodiments of this application are implemented is not limited to two chips; any number of chips may be used. Although the above is implemented with two chips, those skilled in the art will appreciate that a chip is itself generally composed of multiple modules, and if all the modules of the first chip 101 and the second chip 102 were packaged together, the above embodiments could be regarded as being implemented by a single chip.
The technical solution of this application is described below from the software perspective.
FIG. 4 is a flowchart of an image processing method provided by an embodiment of this application. As shown in FIG. 4, the image processing method provided by this application is executed by a terminal device, and the specific steps are as follows:
Step S401: obtain video information.
The obtained video information includes a video stream and an instruction set. The video stream is the video watched by the user, and the instruction set mainly includes the instructions entered by the user when interacting while watching the video, such as typing text, tapping the "Like" virtual button, or tapping the "Send a gift" virtual button. After receiving the instruction set, the smart TV 100 converts each instruction and displays the corresponding images on the instruction layer, for example the entered text, a "thumbs-up" image for a like, or a "rocket" image for a gift.
Step S403: determine the video layer and the position information of the video stream on the video layer.
Generally, the video picture shown on the display screen is formed by superimposing the content images displayed on several layers, including a video layer and an instruction layer, and each layer has exactly the same size and dimensions as the display screen. The video layer displays the video images of the video stream, while the instruction layer displays the virtual buttons for controlling playback of the video stream, the virtual buttons for adjusting its playback speed, the content images corresponding to instructions entered when the user interacts, and so on.
In the embodiments of this application, the video layer is identified through video layer recognition technology so that the video stream data displayed on it can subsequently be transmitted over a separate channel; after the video layer has been identified, it is tagged with an identifier so that the video layer can later be recognized and PQ enhancement performed on the video stream displayed on it.
In addition, the size of the video image in the video stream is usually not exactly the same as the size of the display screen, so when a video is watched there are usually "black" regions above and below the video image shown on the display screen, or "black" regions to its left and right. It is therefore necessary to record the position at which the video image is displayed on the display screen, so that when the video layer is later superimposed with the instruction layer it can be ensured that the position where the video stream is displayed is not occluded by the instruction layer.
In a possible embodiment, when the video image in the video stream and the display screen differ in size, the terminal device places the video image at a set position on the video layer according to preset requirements, or places it at the centre of the video layer according to a "centring" rule; alternatively, if the video image is too large or too small, the video image in the video stream is enlarged or reduced so that its width matches the width of the video layer, or its height matches the height of the video layer, and it is then placed at the set position on the video layer. After the position at which the video image is displayed on the video layer has been determined, that position is recorded.
The position information may be coordinate information, for example the coordinates of the four vertices of the video image on the video layer, recorded with the lower-left corner of the video layer as the origin; it may also be distance information, for example the distance between the left border of the video image and the left border of the video layer, the distance between the right border of the video image and the right border of the video layer, the distance between the top border of the video image and the top border of the video layer, and the distance between the bottom border of the video image and the bottom border of the video layer.
Step S405: perform PQ enhancement on the video stream displayed on the video layer.
Step S407: superimpose the PQ-enhanced video layer and the instruction layer, and send the result to the display screen for display.
Specifically, after the position information of the video stream on the video layer has been obtained, since the shape and size of the instruction layer are exactly the same as those of the display screen 103, the region in which the video image of the video stream is displayed when the video layer and the instruction layer are superimposed is determined from the coordinates or margins in the position information of the video layer, and that region is set to a transparent state, so that when the video layer and the instruction layer are subsequently superimposed and displayed, the video stream displayed on the video layer is not occluded by the instruction layer. Each instruction in the instruction set is then converted into the images corresponding to the user's interactive input, such as text, a thumbs-up, or a rocket, and displayed on the instruction layer.
In addition, the terminal device recognizes the video layer from the identifier on it, performs PQ enhancement on the video images of the video stream displayed on the video layer, and then displays the enhanced video images on the video layer.
Finally, when the terminal device displays the video on the display screen, the PQ-enhanced video layer and the processed instruction layer are superimposed and sent to the display screen for display. In the image processing method provided by the embodiments of this application, after the video stream and the instruction set have been obtained, the layer on which the video stream is displayed is first identified; after the position information of the video stream on that layer has been determined, PQ enhancement is performed on the video images of the video stream when it is displayed on the video layer; then, according to the position information and the interactive content converted from the instruction set, the corresponding region of the instruction layer is set to a transparent state and the interactive content is displayed; finally, the PQ-enhanced video layer and the processed instruction layer are superimposed and displayed, so that a video with high picture quality that also shows interactive content is obtained, and the user can see the interaction information while watching a high-quality video.
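Putting steps S401 to S407 together, a highly simplified single-process sketch that reuses the illustrative helpers above might look as follows; place_on_layer and draw_on_layer are hypothetical helpers that paste the scaled video image onto the video layer and draw an overlay element onto the instruction layer, and none of this is presented as the patented implementation.
    def process_video_info(video_frames, instruction_set, video_size, layer_size,
                           instruction_layer):
        """Sketch of S401-S407: position the video, enhance it, prepare the
        instruction layer, and yield composed frames for the display."""
        (video_w, video_h), (layer_w, layer_h) = video_size, layer_size
        corners, margins = fit_video_on_layer(video_w, video_h, layer_w, layer_h)  # S403
        ui = punch_transparent_window(instruction_layer, margins)
        for instr in instruction_set:
            draw_on_layer(ui, convert_instruction(instr))  # hypothetical drawing helper
        for frame in video_frames:
            enhanced = pq_enhance(frame)                                       # S405
            video_layer = place_on_layer(enhanced, corners, layer_w, layer_h)  # hypothetical helper
            yield compose(video_layer, ui)                                     # S407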
FIG. 5 is a schematic structural diagram of an image processing apparatus provided by an embodiment of this application. As shown in FIG. 5, the image processing apparatus 500 provided by the embodiments of this application includes an obtaining unit 501, a determining unit 503, and a processing unit 505. The obtaining unit 501 is configured to obtain video information, where the video information includes a video stream and an instruction set, and the instruction set includes instructions entered by the user when interacting while watching the video. The determining unit 503 is configured to determine the video layer and the position information of the video stream displayed on the video layer. The processing unit 505 is configured to perform PQ enhancement on the video stream displayed on the video layer, and to superimpose the PQ-enhanced video layer and the instruction layer and send the result to the display screen for display, where the instruction layer is used to display the interactive content converted from the instructions in the instruction set and includes a first region, determined according to the position information, that is rendered transparent. For the specific implementation of each unit, refer to the description of the foregoing embodiments.
A person of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether these functions are executed in hardware or software depends on the specific application and design constraints of the technical solution. Skilled persons may use different methods to implement the described functions for each particular application, but such implementations should not be regarded as going beyond the scope of the embodiments of this application.
In addition, various aspects or features of the embodiments of this application may be implemented as a method, an apparatus, or an article of manufacture using standard programming and/or engineering techniques. The term "article of manufacture" as used in this application covers a computer program accessible from any computer-readable device, carrier, or medium. For example, computer-readable media may include, but are not limited to, magnetic storage devices (for example, hard disks, floppy disks, or magnetic tapes), optical discs (for example, compact discs (CD) and digital versatile discs (DVD)), smart cards, and flash memory devices (for example, erasable programmable read-only memory (EPROM), cards, sticks, or key drives). In addition, the various storage media described herein may represent one or more devices and/or other machine-readable media for storing information. The term "machine-readable medium" may include, but is not limited to, wireless channels and various other media capable of storing, containing, and/or carrying instructions and/or data.
In the above embodiments, the image processing apparatus 500 in FIG. 5 may be implemented wholly or partly by software, hardware, firmware, or any combination thereof. When implemented in software, it may be implemented wholly or partly in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions described in the embodiments of this application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data centre to another website, computer, server, or data centre by wired means (for example coaxial cable, optical fibre, or digital subscriber line (DSL)) or wireless means (for example infrared, radio, or microwave). The computer-readable storage medium may be any usable medium accessible to a computer, or a data storage device such as a server or data centre that integrates one or more usable media. The usable medium may be a magnetic medium (for example a floppy disk, hard disk, or magnetic tape), an optical medium (for example a DVD), or a semiconductor medium (for example a solid state disk (SSD)).
It should be understood that, in the various embodiments of this application, the sequence numbers of the above processes do not imply an order of execution; the order in which the processes are executed should be determined by their functions and internal logic, and shall not constitute any limitation on the implementation of the embodiments of this application.
Those skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the system, apparatus, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; for example, the division of the units is only a division by logical function, and there may be other divisions in actual implementation, for example multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of the embodiments of this application, in essence, or the part that contributes to the prior art, or a part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, an access network device, or the like) to execute all or some of the steps of the methods described in the various embodiments of this application. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above are only specific implementations of the embodiments of this application, but the scope of protection of the embodiments of this application is not limited thereto. Any variation or replacement that can readily be conceived by a person familiar with this technical field within the technical scope disclosed in the embodiments of this application shall fall within the scope of protection of the embodiments of this application.

Claims (16)

1. An image processing method, wherein the method is executed by a terminal device and comprises:
    obtaining video information, wherein the video information comprises a video stream and an instruction set, and the instruction set comprises instructions entered by a user when interacting while watching a video;
    determining a video layer and position information of the video stream displayed on the video layer;
    performing image quality (PQ) enhancement on the video stream displayed on the video layer; and
    superimposing the PQ-enhanced video layer and an instruction layer, and sending the result to a display screen for display, wherein the instruction layer is used to display interactive content converted from the instructions in the instruction set, and the instruction layer comprises a first region, determined according to the position information, that is rendered transparent.
2. The method according to claim 1, wherein determining the video layer comprises:
    identifying the video layer through video layer recognition technology.
3. The method according to claim 2, wherein the method further comprises:
    adding an identifier to the video layer, wherein the identifier is used to determine that the data on which PQ enhancement is performed is the video stream displayed on the video layer.
4. The method according to claim 1, wherein determining the position information of the video stream displayed on the video layer comprises:
    determining, according to the size of a video image in the video stream and the size of the video layer, the position at which the video image in the video stream is displayed on the video layer; and
    recording coordinate information or margin information of the video image in the video stream on the video layer.
5. An image processing apparatus, comprising:
    an obtaining unit, configured to obtain video information, wherein the video information comprises a video stream and an instruction set, and the instruction set comprises instructions entered by a user when interacting while watching a video;
    a determining unit, configured to determine a video layer and position information of the video stream displayed on the video layer; and
    a processing unit, configured to perform PQ enhancement on the video stream displayed on the video layer, and
    to superimpose the PQ-enhanced video layer and an instruction layer and send the result to a display screen for display, wherein the instruction layer is used to display interactive content converted from the instructions in the instruction set, and the instruction layer comprises a first region, determined according to the position information, that is rendered transparent.
6. The apparatus according to claim 5, wherein the determining unit is specifically configured to identify the video layer through video layer recognition technology.
7. The apparatus according to claim 6, wherein the determining unit is further configured to add an identifier to the video layer, and the identifier is used to determine that the data on which PQ enhancement is performed is the video stream displayed on the video layer.
8. The apparatus according to claim 5, wherein the determining unit is specifically configured to determine, according to the size of a video image in the video stream and the size of the video layer, the position at which the video image in the video stream is displayed on the video layer; and
    to record coordinate information or margin information of the video image in the video stream on the video layer.
9. An electronic device, comprising:
    a transceiver, configured to obtain video information, wherein the video information comprises a video stream and an instruction set, and the instruction set comprises instructions entered by a user when interacting while watching a video;
    a first chip, configured to determine a video layer and position information of the video stream displayed on the video layer; and
    a second chip, configured to perform PQ enhancement on the video stream displayed on the video layer, and
    to superimpose the PQ-enhanced video layer and an instruction layer and send the result to a display screen for display, wherein the instruction layer is used to display interactive content converted from the instructions in the instruction set, and the instruction layer comprises a first region, determined according to the position information, that is rendered transparent.
10. The electronic device according to claim 9, wherein a first channel and a second channel are provided between the first chip and the second chip,
    the first channel being used to send the video stream identified by the first chip to the second chip, and
    the second channel being used to send the position information of the video stream from the first chip and the instruction set to the second chip.
11. The electronic device according to claim 10, wherein the first channel is a data path (DP), and the second channel is a peripheral component interconnect express (PCIe) channel.
12. The electronic device according to claim 9, wherein the first chip is specifically configured to identify the video layer through video layer recognition technology.
13. The electronic device according to claim 9, wherein the first chip is further configured to add an identifier to the video layer, and the identifier is used to determine that the data on which the second chip performs PQ enhancement is the video stream displayed on the video layer.
14. The electronic device according to claim 9, wherein the first chip is specifically configured to determine, according to the size of a video image in the video stream and the size of the video layer, the position at which the video image in the video stream is displayed on the video layer; and
    to record coordinate information or margin information of the video image in the video stream on the video layer.
15. A computer-readable storage medium for storing instructions/executable code which, when executed by a processor of an electronic device, cause the electronic device to implement the method according to any one of claims 1-4.
16. A computer program product containing instructions which, when run on a computer, cause the computer to execute the method according to any one of claims 1-4.
PCT/CN2021/080248 2020-04-26 2021-03-11 Image processing method and apparatus, and electronic device WO2021218430A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010338390.9 2020-04-26
CN202010338390.9A CN111565337A (en) 2020-04-26 2020-04-26 Image processing method and device and electronic equipment

Publications (1)

Publication Number Publication Date
WO2021218430A1 true WO2021218430A1 (en) 2021-11-04

Family

ID=72073130

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/080248 WO2021218430A1 (en) 2020-04-26 2021-03-11 Image processing method and apparatus, and electronic device

Country Status (2)

Country Link
CN (1) CN111565337A (en)
WO (1) WO2021218430A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111565337A (en) * 2020-04-26 2020-08-21 华为技术有限公司 Image processing method and device and electronic equipment
CN113271429A (en) * 2020-09-30 2021-08-17 常熟九城智能科技有限公司 Video conference information processing method and device, electronic equipment and system
CN112328339B (en) * 2020-10-10 2024-04-30 Oppo(重庆)智能科技有限公司 Notification message display method and device, storage medium and electronic equipment
CN113625983A (en) * 2021-08-10 2021-11-09 Oppo广东移动通信有限公司 Image display method, image display device, computer equipment and storage medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040075652A1 (en) * 2002-10-17 2004-04-22 Samsung Electronics Co., Ltd. Layer editing method and apparatus in a pen computing system
CN101321240A (en) * 2008-06-25 2008-12-10 华为技术有限公司 Method and device for multi-drawing layer stacking
CN102428463A (en) * 2009-05-28 2012-04-25 贺利实公司 Multimedia system providing database of shared text comment data indexed to video source data and related methods
US20120260195A1 (en) * 2006-01-24 2012-10-11 Henry Hon System and method to create a collaborative web-based multimedia contextual dialogue
CN106412621A (en) * 2016-09-28 2017-02-15 广州华多网络科技有限公司 Video display method and device of network studio, control method and related equipment
CN106488296A (en) * 2016-10-18 2017-03-08 广州酷狗计算机科技有限公司 A kind of method and apparatus of display video barrage
CN110427094A (en) * 2019-07-17 2019-11-08 Oppo广东移动通信有限公司 Display methods, device, electronic equipment and computer-readable medium
CN110536151A (en) * 2019-09-11 2019-12-03 广州华多网络科技有限公司 The synthetic method and device of virtual present special efficacy, live broadcast system
CN111565337A (en) * 2020-04-26 2020-08-21 华为技术有限公司 Image processing method and device and electronic equipment

Also Published As

Publication number Publication date
CN111565337A (en) 2020-08-21

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 21796761
    Country of ref document: EP
    Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
122 Ep: pct application non-entry in european phase
    Ref document number: 21796761
    Country of ref document: EP
    Kind code of ref document: A1