CN113542623A - Image processing method and related device - Google Patents

Image processing method and related device

Info

Publication number
CN113542623A
CN113542623A (application CN202010313697.3A)
Authority
CN
China
Prior art keywords
image
video
image data
data
osd
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010313697.3A
Other languages
Chinese (zh)
Inventor
郑超
范泽华
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202010313697.3A
Publication of CN113542623A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N 5/265 Mixing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N 21/4312 Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N 21/4316 Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations for displaying supplemental content in a region of the screen, e.g. an advertisement in a separate window
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N 21/44016 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving splicing one content stream with another content stream, e.g. for substituting a video clip

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The embodiment of the application discloses an image processing method and a related device. The method comprises the following steps: identifying data parameters of at least one original image data when input of the at least one original image data is detected; determining at least one first Video image data and at least one first OSD image data according to the data parameters; inputting the at least one first OSD image into a first mobile industry processor interface module for processing to obtain at least one second OSD image; inputting the at least one first Video image into a second mobile industry processor interface module for processing to obtain at least one second Video image; synthesizing the at least one second Video image and the at least one second OSD image through a Video fusion module to obtain at least one target Video; and outputting the at least one target Video to a display screen through an output module. The embodiment of the application helps avoid deformation and tearing of the on-screen display area caused by frame interpolation of the video's dynamic content, reduces the amount of frame-interpolation computation, and lowers power consumption.

Description

Image processing method and related device
Technical Field
The present application relates to the field of electronic device technologies, and in particular, to an image processing method and a related apparatus.
Background
With the development of technology, watching videos has become increasingly convenient, and short-video applications have proliferated. Because video is currently shot mostly in 24p/30p recording formats with relatively long exposure times, 24p footage in particular exhibits a slight stutter in the picture, and frame-interpolation processing applied to such video may produce abnormal display when the user operates the electronic device. Current video frame-interpolation methods usually synthesize an intermediate frame directly from various combinations of the preceding and following frames, which can cause judder, tearing, or severe blurring when motion in the video scene is large.
Disclosure of Invention
The embodiment of the application provides an image processing method and a related device, which aim to avoid abnormal on-screen display effects caused by frame interpolation during high-frame-rate video playback, reduce the amount of frame-interpolation computation, and lower power consumption.
In a first aspect, an embodiment of the present application provides an image processing method, which is applied to an electronic device, where the electronic device includes an AP end, a display screen, and an image processing chip, where the image processing chip includes at least one first mobile industry processor interface module, at least one second mobile industry processor interface module, a video fusion module, and an output module, the AP end is connected to the at least one first mobile industry processor interface module and the at least one second mobile industry processor interface module, the at least one first mobile industry processor interface module and the at least one second mobile industry processor interface module are connected to the video fusion module, the video fusion module is connected to the output module, and the output module is connected to the display screen; the method comprises the following steps:
when at least one original image data input is detected, identifying data parameters of the at least one original image data, wherein the data parameters comprise data types and the number of each data type, and the data types comprise first Video image data and first OSD image data;
determining at least one first Video image data and at least one first OSD image data according to the data parameter;
inputting the at least one first OSD image into the first mobile industry processor interface module for processing to obtain at least one second OSD image;
inputting the at least one first Video image into the second mobile industry processor interface module for processing to obtain at least one second Video image;
synthesizing the at least one second Video image and the at least one second OSD image through the Video fusion module to obtain at least one target Video;
outputting the at least one target video to the display screen by the output module.
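The six steps above can be sketched end to end as follows. This is a minimal illustrative sketch, not the patented implementation: all function and field names (`classify_layers`, `mipi_rx0_process`, `mipi_rx1_process`, `fuse`, and the `"kind"`/`"t"` fields) are hypothetical stand-ins for the modules the claim names.

```python
# Hypothetical sketch of the claimed pipeline: classify raw image data into
# Video and OSD layers, interpolate only the Video path, then fuse and output.

def classify_layers(raw_frames):
    """Split raw image data into Video and OSD layers by a 'kind' tag."""
    osd = [f for f in raw_frames if f["kind"] == "osd"]
    video = [f for f in raw_frames if f["kind"] == "video"]
    return video, osd

def mipi_rx1_process(osd_layers):
    """First MIPI module: pass OSD layers through without interpolation."""
    return list(osd_layers)

def mipi_rx0_process(video_layers):
    """Second MIPI module: insert an intermediate frame between neighbours."""
    out = []
    for a, b in zip(video_layers, video_layers[1:]):
        out.append(a)
        out.append({"kind": "video", "t": (a["t"] + b["t"]) / 2})  # new frame
    out.append(video_layers[-1])
    return out

def fuse(video_frames, osd_layers):
    """Video fusion module: attach the untouched OSD to every video frame."""
    return [{"video": v, "osd": osd_layers} for v in video_frames]

raw = [{"kind": "video", "t": 0}, {"kind": "osd", "t": 0},
       {"kind": "video", "t": 1}]
video, osd = classify_layers(raw)
target = fuse(mipi_rx0_process(video), mipi_rx1_process(osd))
print(len(target))  # 3 frames: two originals plus one interpolated
```

Note that the OSD layer never passes through the interpolation step, which is the point of the dual-path design: only the video content pays the interpolation cost.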
In a second aspect, an embodiment of the present application provides an image processing apparatus, which is applied to an electronic device, where the electronic device includes an AP end, a display screen, and an image processing chip, the image processing chip includes at least one first mobile industry processor interface module, at least one second mobile industry processor interface module, a video fusion module, and an output module, the AP end is connected to the at least one first mobile industry processor interface module and the at least one second mobile industry processor interface module respectively, the at least one first mobile industry processor interface module and the at least one second mobile industry processor interface module are connected to the video fusion module respectively, the video fusion module is connected to the output module, and the output module is connected to the display screen; the apparatus comprises a processing unit and a communication unit, wherein,
the processing unit is used for identifying data parameters of at least one original image data when the input of the at least one original image data is detected, wherein the data parameters comprise data types and the number of each data type included in the original image data, and the data types comprise first Video image data and first OSD image data; determining at least one first Video image data and at least one first OSD image data according to the data parameter; inputting the at least one first OSD image into the first mobile industry processor interface module for processing to obtain at least one second OSD image; inputting the at least one first Video image into the second mobile industry processor interface module for processing to obtain at least one second Video image; synthesizing the at least one second Video image and the at least one second OSD image through the Video fusion module to obtain at least one target Video; and outputting the at least one target video to the display screen by the output module.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and the program includes instructions for executing steps in any method of the first aspect of the embodiment of the present application.
In a fourth aspect, the present application provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program for electronic data exchange, where the computer program makes a computer perform part or all of the steps described in any one of the methods of the first aspect of the present application.
In a fifth aspect, the present application provides a computer program product, wherein the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to perform some or all of the steps as described in any one of the methods of the first aspect of the embodiments of the present application. The computer program product may be a software installation package.
It can be seen that, in the embodiment of the present application, when the electronic device detects that at least one original image data is input, a data parameter of the at least one original image data is identified; determining at least one first Video image data and at least one first OSD image data according to the data parameter; inputting at least one first OSD image into a first mobile industry processor interface module for processing to obtain at least one second OSD image; inputting at least one first Video image into a second mobile industry processor interface module for processing to obtain at least one second Video image; synthesizing at least one second Video image and at least one second OSD image through a Video fusion module to obtain at least one target Video; and outputting the at least one target video to a display screen by an output module. Therefore, the target Video is obtained by processing the first OSD image and the first Video image separately through the two mobile industry processor interface paths, which avoids deformation and tearing of the on-screen display area caused by frame interpolation of the video's dynamic content, reduces the amount of frame-interpolation computation, and lowers power consumption.
Drawings
To describe the technical solutions in the embodiments of the present application or in the prior art more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below. Apparently, the drawings in the following description show only some embodiments of the present application, and a person of ordinary skill in the art may derive other drawings from these drawings without creative efforts.
Fig. 1A is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure;
fig. 1B is a schematic diagram illustrating an OSD image display abnormality according to an embodiment of the present application;
fig. 2 is a schematic flowchart of an image processing method according to an embodiment of the present application;
FIG. 3 is a diagram illustrating an architecture of an image processing chip according to an embodiment of the present disclosure;
fig. 4A is a display interface diagram for video editing according to an embodiment of the present application;
FIG. 4B is an interface diagram of a split-screen application display provided by an embodiment of the present application;
FIG. 4C is a schematic interface diagram of a multi-window display provided by an embodiment of the present application;
fig. 5 is a schematic flowchart of another image processing method according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 7 is a block diagram of functional units of an image processing apparatus according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," and the like in the description and claims of the present application and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
As shown in fig. 1A, fig. 1A is a schematic structural diagram of an electronic device 100, where the electronic device 100 includes a display screen 110, an image processing chip 120 and an AP terminal 130, the image processing chip 120 includes a first mobile industry processor interface module 121, a second mobile industry processor interface module 122, a video fusion module 123 and an output module 124, the image processing chip may be an Iris chip, the first mobile industry processor interface module 121 and the second mobile industry processor interface module 122 are respectively connected to the video fusion module 123, the video fusion module 123 is connected to the output module 124, and the output module 124 is connected to the display screen 110. The electronic device may be any electronic device with a video image display function, and may include various handheld devices with a wireless communication function (smart phones, tablet computers, etc.), vehicle-mounted devices (navigators, vehicle-mounted refrigerators, vehicle-mounted vacuum cleaners, etc.), wearable devices (smart watches, smart bracelets, wireless headsets, augmented reality/virtual reality devices, smart glasses), computing devices or other processing devices connected to wireless modems, and various forms of User Equipment (UE), Mobile Stations (MS), terminal devices (terminal device), and the like.
As shown in fig. 1B, fig. 1B is a schematic diagram of an OSD image display abnormality provided by the embodiment of the present application. When the first Video image data is interpolated from position A to position B, the first OSD image data (grid hexagram) in the video scene jitters and tears. At present, video sources are mostly 24 fps/30 fps, smart phones do not raise the frame rate of the video, and existing schemes are still lacking with respect to raising the frame rate of the video source; moreover, when the current video frame-interpolation method performs interpolation, judder, tearing, or severe blurring may occur when motion in the video scene is large.
In view of the above problem, the present application provides an image processing method, and the following describes an embodiment of the present application in detail with reference to the accompanying drawings.
Referring to fig. 2, fig. 2 is a schematic flow chart of an image processing method provided in an embodiment of the present application, and is applied to the electronic device shown in fig. 1A, where the electronic device includes an AP end, a display screen, and an image processing chip, the image processing chip includes a first mobile industry processor interface module, a second mobile industry processor interface module, a video fusion module, and an output module, the AP end is respectively connected to the at least one first mobile industry processor interface module and the at least one second mobile industry processor interface module, the at least one first mobile industry processor interface module and the at least one second mobile industry processor interface module are respectively connected to the video fusion module, the video fusion module is connected to the output module, and the output module is connected to the display screen; the method comprises the following steps:
s201, when the electronic equipment detects that at least one original image data is input, identifying data parameters of the at least one original image data, wherein the data parameters comprise data types and the number of each data type included in the original image data, and the data types comprise first Video image data and first OSD image data;
the first osd (onscreen display) image is a text and graphic image superimposed on the video signal.
S202, the electronic equipment determines at least one first Video image data and at least one first OSD image data according to the data parameters;
s203, the electronic equipment inputs the at least one first OSD image into the first mobile industry processor interface module for processing to obtain at least one second OSD image;
the first Mobile Industry Processor Interface (MIPI) module is MIPI RX1, and the first Mobile Industry Processor interface module does not perform framing processing on the first OSD image data.
S204, the electronic equipment inputs the at least one first Video image into the second mobile industry processor interface module for processing to obtain at least one second Video image;
the second mobile industry processor interface module is MIPI RX0, and performs frame interpolation processing on the first Video image data (Video source layer data).
S205, the electronic device synthesizes the at least one second Video image and the at least one second OSD image through the Video fusion module to obtain at least one target Video;
After receiving the second OSD image and the second Video image sent by the first mobile industry processor interface module and the second mobile industry processor interface module, the Video fusion module synthesizes the second OSD image and the second Video image to obtain the frame-interpolated target Video.
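The synthesis performed by the Video fusion module can be pictured as classic source-over alpha compositing of the OSD layer onto each video frame. The patent does not mandate this operator, so the following is a sketch under that assumption, with single-channel pixels and a per-pixel alpha mask for the OSD layer.

```python
# Assumed fusion operator: "source over" compositing of an OSD pixel (with
# alpha) on top of a video pixel. Not necessarily the chip's actual blend.

def composite(video_px, osd_px, osd_alpha):
    """Blend one OSD pixel over one video pixel; alpha in [0, 1]."""
    return round(osd_alpha * osd_px + (1 - osd_alpha) * video_px)

def fuse_frame(video, osd, alpha_mask):
    """Composite a full OSD layer over a full video frame, pixel by pixel."""
    return [
        [composite(v, o, a) for v, o, a in zip(vr, orow, ar)]
        for vr, orow, ar in zip(video, osd, alpha_mask)
    ]

video = [[10, 10]]
osd = [[200, 200]]
alpha = [[1.0, 0.0]]  # left pixel fully OSD, right pixel fully video
print(fuse_frame(video, osd, alpha))  # [[200, 10]]
```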
S206, the electronic equipment outputs the at least one target video to the display screen through the output module.
The output module is a MIPI TX module, which outputs the target video to the display screen.
It can be seen that, in the embodiment of the present application, when the electronic device detects that at least one original image data is input, a data parameter of the at least one original image data is identified; determining at least one first Video image data and at least one first OSD image data according to the data parameter; inputting at least one first OSD image into a first mobile industry processor interface module for processing to obtain at least one second OSD image; inputting at least one first Video image into a second mobile industry processor interface module for processing to obtain at least one second Video image; synthesizing at least one second Video image and at least one second OSD image through a Video fusion module to obtain at least one target Video; and outputting the at least one target video to a display screen by an output module. Therefore, the electronic equipment identifies and obtains the first OSD image and the first Video image according to the original image data, and then respectively processes the first OSD image and the first Video image to obtain the target Video, so that the problem of abnormal on-screen display effect caused by frame insertion in high-frame-rate Video playing is solved, the frame insertion operation content is reduced, and the power consumption is reduced.
In one possible example, determining at least one first Video image data and at least one first OSD image data according to the data parameter includes: determining layering state information and display area information according to the data parameters; and determining the first Video image and the first OSD image according to the layering state information and the display area information.
The at least one original image data may be at least one original image data that has been layered at the time of input, or may be at least one original image data that has no layering, and the at least one original image data that has no layering further includes at least one original image data whose display content is only displayed in a partial area of the screen, for example, a non-full-screen display state, a split-screen application display state, and the like.
In a specific implementation, the OSD image in the layered at least one original image data is generally located at a different layer from the Video image in the original image data, such as a picture Video progress bar, a pop-up screen, and the like, while the OSD image in the non-layered at least one original image data is generally located at the same layer as the Video image in the original image data.
As can be seen, in this example, the electronic device can determine the first Video image and the first OSD image according to the parameter information of the input at least one original image data, which is beneficial to implementing the differentiated processing of different image layers of the Video, avoiding the deformation and tearing of the OSD image layer, and saving the power consumption of the image processing chip.
In one possible example, the image processing chip further comprises an image area identification module, the image area identification module connecting the first mobile industry processor interface module and the second mobile industry processor interface module; determining the first Video image and the first OSD image according to the hierarchical state information and the display area information, including: when the at least one original image data is determined to be layered according to the layering state information, directly obtaining a first Video image and a first OSD image of the at least one original image data; and when the at least one original image data is determined to be non-layered according to the layering state information, detecting and acquiring a first Video image and a first OSD image of the at least one original image data through the image area identification module.
As shown in fig. 3, fig. 3 is a schematic diagram of an architecture of an image processing chip, and the image processing chip further includes an image area identification module 125.
When the at least one original image data is layered, the first OSD image of the original display data written by the AP end is received through the first mobile industry processor interface module of the two MIPI paths, and the first Video image of the original display data written by the AP end is received through the second mobile industry processor interface module of the two MIPI paths. For example, common first OSD image content includes bullet-screen comments and the title, likes, and comments of a short video. When the Video content and the OSD content of the at least one original image data are layered, the display content is processed by region: the bullet-screen/ordinary display content is input through MIPI RX1, the video content is input through MIPI RX0, the two are processed separately, and then signal synthesis is performed. When the at least one original image data is non-layered, the at least one original image data is written through the first mobile industry processor interface module and/or the second mobile industry processor interface module, and the image area identification module then identifies the first Video image and the first OSD image in the at least one original image data, sends the first Video image to the second mobile industry processor interface module for processing, and sends the first OSD image to the first mobile industry processor interface module for processing.
In a specific implementation, when the at least one original image data is written through the first mobile industry processor interface module and/or the second mobile industry processor interface module, the image area identification module identifies the first Video image and the first OSD image in the at least one original image data. The signal of the first OSD image area generally does not change, so the image processing chip identifies the input at least one original image data by, for example, setting a data area whose position does not change over a long time as the first OSD image area and an area whose position changes as the first Video image area.
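One way to realize the "unchanged over a long time" heuristic is a per-pixel static mask computed across a window of frames: pixels that never vary are treated as OSD candidates, varying pixels as video. This is an illustrative sketch; the tolerance and frame count are assumptions, not values from the patent.

```python
# Illustrative heuristic for the image area identification module: mark as
# static (OSD-like) every pixel that stays within `tol` of its initial value
# across all observed frames.

def static_mask(frames, tol=0):
    """Return True where a pixel never varies by more than tol across frames."""
    h, w = len(frames[0]), len(frames[0][0])
    mask = [[True] * w for _ in range(h)]
    base = frames[0]
    for frame in frames[1:]:
        for y in range(h):
            for x in range(w):
                if abs(frame[y][x] - base[y][x]) > tol:
                    mask[y][x] = False
    return mask

frames = [
    [[50, 0]],   # left pixel constant (OSD-like), right pixel changing (video)
    [[50, 10]],
    [[50, 20]],
]
print(static_mask(frames))  # [[True, False]]
```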
As can be seen, in this example, the electronic device may identify the first OSD image and the first Video image in the at least one original image data without layering through the image region identification module according to determining whether the at least one original image data has been separated into the first OSD image and the first Video image, which is beneficial to implementing differentiated processing of different image layers of a Video, avoiding deformation and tearing of the OSD image layer, and saving power consumption of the image processing chip.
In one possible example, when it is determined that the at least one original image data is non-layered according to the layering state information, detecting and acquiring a first Video image and a first OSD image of the at least one original image data by the image area identification module includes: when the input of at least one original image data of the at least one original image data is detected, acquiring image information through the image area identification module, wherein the image information comprises image area information and image position information; and determining a first Video image and a first OSD image of the at least one original image data according to the image information.
The non-layered at least one original image data includes at least one original image data displayed in full screen and at least one original image data displayed in non-full screen. For at least one original image data displayed in full screen, the image area identification module identifies the first Video image and the first OSD image of the image, and the image processing chip judges the content of the at least one original image data; for example, content such as bullet-screen comments and the title, likes, and comments of a short video is determined to be the first OSD image, or content whose position does not change within a period of time is determined to be the first OSD image. For at least one original image data displayed in non-full screen, the image area identification module likewise identifies the first Video image and the first OSD image of the image: it identifies that the display content area of the video display data does not change, and determines the video display data as the first Video image.
As can be seen, in this example, the electronic device can identify the first Video image and the first OSD image of the at least one original image data without layering based on the image region identification module, which is beneficial to implementing differentiated processing of different image layers of a Video, avoiding deformation and tearing of the OSD image layer, and saving power consumption of the image processing chip.
In one possible example, when the image information includes image region information, the determining a first Video image and a first OSD image of the at least one original image data according to the image information includes: determining a first image area with position change in the at least one original image data within a preset time, and setting the first image area as a first Video image; determining a second image area without position change within a preset time in the at least one original image data, and setting the second image area as a first OSD image.
When the at least one original image data is not layered, the determination is made according to the data content. The signal of the OSD area in the input original image data generally does not change, so the first OSD image and the first Video image are distinguished by identifying the positions of the areas of the image that change.
In a specific implementation, data whose position does not change within a preset time is taken as the first OSD image of the at least one original image data; for example, the positions of OSD image regions such as the video progress bar and the video title do not change, so the first OSD image can be distinguished. The preset time can be set according to user requirements.
As can be seen, in this example, the electronic device may determine the first OSD image and the first Video image of the at least one original image data without layering through the image processing chip, which is beneficial to implementing differentiated processing of different image layers of a Video, avoiding deformation and tearing of the OSD image layer, and saving power consumption of the image processing chip.
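The preset-time region test described in this example can be sketched as follows. This is a minimal illustration rather than the patented implementation: it assumes grayscale frames held as NumPy arrays, and classifies fixed-size blocks as first Video image (moving) or first OSD image (static) by averaging inter-frame differences over the preset window. The function name, block size, and threshold are all illustrative.

```python
import numpy as np

def classify_regions(frames, block=16, preset_frames=30, threshold=2.0):
    """Classify each block of the frame as Video (moving) or OSD (static).

    frames: grayscale frames (H x W uint8 arrays) covering the preset
    time window. Returns a boolean grid: True = first Video image block,
    False = first OSD image block.
    """
    frames = [f.astype(np.float32) for f in frames[:preset_frames]]
    h, w = frames[0].shape
    # Accumulate per-pixel absolute differences between consecutive frames.
    motion = np.zeros((h, w), dtype=np.float32)
    for prev, cur in zip(frames, frames[1:]):
        motion += np.abs(cur - prev)
    motion /= max(len(frames) - 1, 1)
    # Average motion per block; blocks whose content never changes are OSD.
    gh, gw = h // block, w // block
    blocks = motion[:gh * block, :gw * block].reshape(gh, block, gw, block)
    block_motion = blocks.mean(axis=(1, 3))
    return block_motion > threshold
```

A block whose content stays put over the whole window accumulates no motion and falls on the OSD side regardless of its pixel values, matching the rule that the OSD-area signal generally does not change.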
In one possible example, when the image information includes image position information, the determining a first Video image and a first OSD image of the at least one original image data according to the image information includes: determining at least one original image data display area with position change in the at least one original image data within Video playing time, and setting the at least one original image data display area as a first Video image; determining an interface static display area without position change in the at least one original image data within video playing time, and setting the interface static display area as a first OSD image.
When the at least one original image data is displayed in a non-full screen mode, the display area of the at least one original image data is determined and set as the first Video image area, and the other interface display areas except the display area of the at least one original image data are set as the first OSD image. In the image processing process, only the first Video image displayed in the partial area of the interface is processed, so the problems of OSD tearing and deformation are avoided, and because the image processing chip only performs frame insertion processing on the partial area occupied by the Video image picture, the power consumption of the chip and the battery life of the whole device are improved.
In a specific implementation, as shown in fig. 4A, fig. 4A is a display interface diagram for Video editing. As shown in the figure, when editing a Video, the original image data display area displays the input original image data, and the interface static display area is the editing interface surrounding it; the original image data display area is then used as the first Video image area, and the other editing areas (the interface border, the editing list and its icons, fonts, and buttons such as return, reply, discard, update, and save) are used as the first OSD image area. As shown in fig. 4B, fig. 4B is an interface diagram displayed by a split-screen application, where the original image data display area in the display area of the split-screen application is the first Video image area, and the other areas (the display interface and application icons such as application 1, application 2, and application 3) are first OSD image areas.
Therefore, in this example, the electronic device can determine the first Video image area and the first OSD image area according to the display interface of the at least one original image data that is not displayed in full screen, so that OSD tearing and deformation are avoided, and the power consumption of the chip and the battery life of the whole device are improved.
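The position-based split for non-full-screen playback can be sketched with a hypothetical helper. It assumes the rectangle of the video display area is already known from the display interface; everything outside that rectangle (interface border, buttons, icons) is expressed as up to four OSD rectangles surrounding the video window.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rect:
    x: int
    y: int
    w: int
    h: int

def split_layers(frame_w, frame_h, video_rect):
    """Return (video_region, osd_regions) for a non-full-screen video.

    The area inside video_rect becomes the first Video image; the rest
    of the interface is returned as a list of OSD rectangles, so only
    the video area needs frame insertion.
    """
    r = video_rect
    osd = []
    if r.y > 0:                              # strip above the video window
        osd.append(Rect(0, 0, frame_w, r.y))
    if r.y + r.h < frame_h:                  # strip below
        osd.append(Rect(0, r.y + r.h, frame_w, frame_h - (r.y + r.h)))
    if r.x > 0:                              # strip to the left
        osd.append(Rect(0, r.y, r.x, r.h))
    if r.x + r.w < frame_w:                  # strip to the right
        osd.append(Rect(r.x + r.w, r.y, frame_w - (r.x + r.w), r.h))
    return r, osd
```

The four strips tile the interface exactly, so every pixel is routed to exactly one of the two processing paths.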
In one possible example, after determining at least one first Video image data and at least one first OSD image data according to the data parameter, the method further includes: detecting window information of the display interface; and if the fact that at least one window respectively displays at least one piece of original image data is detected, determining a first Video image and a first OSD image of each window according to at least one original image data image input by each window.
When only one window displays at least one piece of original image data, the at least one piece of original image data is input into the interface module of the mobile industry processor to be processed to obtain a target video.
When a plurality of windows in one display interface each display a piece of original image data, the different pieces of original image data can be input through a plurality of mobile industry processor interface modules for the different display windows respectively, and then processed independently, so the remaining pictures are not disturbed. As shown in fig. 4C, fig. 4C is an interface diagram of a multi-window display, in which the first OSD image and the first Video image of the original image data of different areas (Video 1, Video 2, Video 3, and Video 4) are input through different first and second mobile industry processor interface modules. In this way, different types of data can be processed simultaneously and the image display quality is ensured; because the data processing of each part is independent, there are no problems such as image tearing caused by different motion vectors of inserted frames in different areas of the image.
In a specific implementation, the display images of the left and right half screens of a split screen can be input through the first mobile industry processor interface module and the second mobile industry processor interface module respectively, subjected to different frame insertion processing, and then output to the display screen through the output module.
In a specific implementation, the number of channels input to the MIPI RX of the image processing chip is not limited.
As can be seen, in this example, the electronic device may perform different frame insertion processing on the basis of multiple original images displayed in multiple windows or multiple split screens, so as to avoid tearing of the picture and ensure the diversity of data processing and the picture display quality.
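The per-window independence described above can be sketched as follows. The two channel functions are hypothetical stand-ins for the first and second mobile industry processor interface modules; each window's Video layer is frame-interpolated on its own channel, so differing motion vectors across windows cannot interfere with each other.

```python
def process_windows(windows, interpolate, passthrough):
    """Process each window's layers on its own channel.

    windows: {window_id: {"video": ..., "osd": ...}}
    interpolate / passthrough: per-channel processing functions that
    stand in for the second (Video) and first (OSD) MIPI modules.
    """
    out = {}
    for wid, layers in windows.items():
        # Each window is processed independently; no shared state means
        # no cross-window tearing from mismatched inserted frames.
        out[wid] = {
            "video": interpolate(layers["video"]),
            "osd": passthrough(layers["osd"]),
        }
    return out
```

In a toy run, `interpolate` can simply duplicate the frame list to mimic frame insertion while the OSD layer passes through unchanged.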
Please refer to fig. 5 in accordance with the embodiment shown in fig. 2, where fig. 5 is a schematic flowchart of an image processing method provided in an embodiment of the present application, and is applied to the electronic device shown in fig. 1A, where the electronic device includes an AP terminal, a display screen, and an image processing chip, the image processing chip includes a first mobile industry processor interface module, a second mobile industry processor interface module, a video fusion module, and an output module, the first mobile industry processor interface module and the second mobile industry processor interface module are respectively connected to the video fusion module, the video fusion module is connected to the output module, and the output module is connected to the display screen; as shown in the figure, the image processing method includes:
s501, when the electronic equipment detects that at least one original image data is input, identifying data parameters of the at least one original image data, wherein the data parameters comprise data types and the number of each data type, and the data types comprise first Video image data and first OSD image data;
s502, the electronic equipment determines hierarchical state information and display area information according to the data parameters;
s503, the electronic device determines the first Video image and the first OSD image according to the hierarchical state information and the display area information;
s504, the electronic equipment inputs the first OSD image into the first mobile industry processor interface module for processing to obtain a second OSD image;
s505, the electronic equipment inputs the first Video image into the second mobile industry processor interface module for processing to obtain a second Video image;
s506, the electronic equipment synthesizes the second Video image and the second OSD image through the Video fusion module to obtain a target Video;
s507, the electronic equipment outputs the target video to the display screen through the output module.
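Steps S501 to S507 can be strung together in a short end-to-end sketch. The hardware blocks (layer classification for S501-S503, the two mobile industry processor interface modules for S504-S505, the Video fusion module for S506, and the output module for S507) are modeled as caller-supplied functions; all names here are illustrative, not part of the patent.

```python
def image_processing_pipeline(raw_frames, classify, mipi_osd, mipi_video,
                              fuse, output):
    """End-to-end sketch of steps S501-S507.

    classify: splits a raw frame into (first_video, first_osd) layers.
    mipi_osd / mipi_video: the two MIPI interface channels.
    fuse: the video fusion module; output: pushes frames to the display.
    """
    target = []
    for frame in raw_frames:
        first_video, first_osd = classify(frame)       # S501-S503
        second_osd = mipi_osd(first_osd)               # S504
        second_video = mipi_video(first_video)         # S505 (frame insertion)
        target.append(fuse(second_video, second_osd))  # S506
    output(target)                                     # S507
    return target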
It can be seen that, in the embodiment of the present application, when the electronic device detects that at least one original image data is input, a data parameter of the at least one original image data is identified; determining at least one first Video image data and at least one first OSD image data according to the data parameter; inputting at least one first OSD image into a first mobile industry processor interface module for processing to obtain at least one second OSD image; inputting at least one first Video image into a second mobile industry processor interface module for processing to obtain at least one second Video image; synthesizing at least one second Video image and at least one second OSD image through a Video fusion module to obtain at least one target Video; and outputting the at least one target video to a display screen by an output module. Therefore, the target Video is obtained by processing the first OSD image and the first Video image separately through the two mobile industry processor interface channels, the problems of deformation and tearing of the on-screen display area caused by frame insertion of the Video's dynamic display content are avoided, the amount of frame insertion computation is reduced, and the power consumption is reduced.
In addition, the electronic device can determine the first Video image and the first OSD image according to the parameter information of the input at least one original image data, which is beneficial to realizing the differentiated processing of different image layers of the Video, avoiding the deformation and tearing of the OSD image layer, and saving the power consumption of the image processing chip.
In accordance with the embodiments shown in fig. 2 and fig. 5, please refer to fig. 6, fig. 6 is a schematic structural diagram of an electronic device 600 according to an embodiment of the present application, and as shown in the figure, the electronic device 600 includes an application processor 610, a memory 620, a communication interface 630, and one or more programs 621, where the one or more programs 621 are stored in the memory 620 and configured to be executed by the application processor 610, and the one or more programs 621 include instructions for performing the following steps;
determining a first Video image and a first OSD image according to at least one input original image data image, wherein the first Video image is a Video dynamic image, and the first OSD image is an on-screen display image;
inputting the first OSD image into the first mobile industry processor interface module for processing to obtain a second OSD image;
inputting the first Video image into the second mobile industry processor interface module for processing to obtain a second Video image;
synthesizing the second Video image and the second OSD image through the Video fusion module to obtain a target Video;
and outputting the target video to the display screen by the output module.
It can be seen that, in the embodiment of the present application, when the electronic device detects that at least one original image data is input, a data parameter of the at least one original image data is identified; determining at least one first Video image data and at least one first OSD image data according to the data parameter; inputting at least one first OSD image into a first mobile industry processor interface module for processing to obtain at least one second OSD image; inputting at least one first Video image into a second mobile industry processor interface module for processing to obtain at least one second Video image; synthesizing at least one second Video image and at least one second OSD image through a Video fusion module to obtain at least one target Video; and outputting the at least one target video to a display screen by an output module. Therefore, the target Video is obtained by processing the first OSD image and the first Video image separately through the two mobile industry processor interface channels, the problems of deformation and tearing of the on-screen display area caused by frame insertion of the Video's dynamic display content are avoided, the amount of frame insertion computation is reduced, and the power consumption is reduced.
In one possible example, in the aspect of determining the first Video image and the first OSD image according to the input at least one original image data image, the instructions in the program are specifically configured to: identifying parameter information of the at least one original image data, the parameter information including hierarchical state information and display area information; and determining the first Video image and the first OSD image according to the parameter information.
In one possible example, the image processing chip further comprises an image area identification module, the image area identification module connecting the first mobile industry processor interface module and the second mobile industry processor interface module; when the parameter information is hierarchical state information, in the aspect of determining the first Video image and the first OSD image according to the parameter information, the instruction in the program is specifically configured to perform the following operations: identifying hierarchical state information of an image of the at least one original image data; when the layering state information is layering, acquiring a first Video image and a first OSD image of the at least one original image data; and when the layering state information is no layering, identifying a first Video image and a first OSD image of the at least one original image data through the image area identification module.
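The branch on the layering state in this paragraph can be sketched as follows, with hypothetical callbacks: when the input already carries separated layers they are used directly, and only a non-layered input goes through the image area identification module.

```python
def get_layers(frame, layered, split_layers_fn, identify_fn):
    """Return (first_video, first_osd) for one input frame.

    layered: the hierarchical state information of the frame.
    split_layers_fn: reads the layers already separated upstream.
    identify_fn: the image area identification module's classifier,
    invoked only when the input carries no layer information.
    """
    if layered:
        return split_layers_fn(frame)   # layers delivered pre-separated
    return identify_fn(frame)           # must be identified from content
```

Keeping the two paths behind one call site means the downstream MIPI channels do not need to know whether the split came from metadata or from content analysis.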
In one possible example, when the hierarchical state information is no hierarchical layer, in terms of identifying the first Video image and the first OSD image of the at least one original image data by the image region identification module, the instructions in the program are specifically configured to perform the following operations: when the input of the at least one original image data is detected, acquiring image information through the image area identification module, wherein the image information comprises image area information and image position information; and determining a first Video image and a first OSD image of the at least one original image data according to the image information.
In one possible example, when the image information includes image region information, in the aspect of determining the first Video image and the first OSD image of the at least one original image data according to the image information, the instructions in the program are specifically configured to: determining a first image area with position change in the at least one original image data within a preset time, and setting the first image area as a first Video image; determining a second image area without position change within a preset time in the at least one original image data, and setting the second image area as a first OSD image.
In one possible example, when the image information includes image position information, in the aspect of determining the first Video image and the first OSD image of the at least one original image data according to the image information, the instructions in the program are specifically configured to: determining at least one original image data display area with position change in the at least one original image data within Video playing time, and setting the at least one original image data display area as a first Video image; determining an interface static display area without position change in the at least one original image data within video playing time, and setting the interface static display area as a first OSD image.
In one possible example, after determining the at least one first Video image data and the at least one first OSD image data according to the data parameter, the program further includes instructions for: detecting window information of the display interface; and if the fact that at least one window respectively displays at least one piece of original image data is detected, determining a first Video image and a first OSD image of each window according to at least one original image data image input by each window.
The above description has introduced the solution of the embodiment of the present application mainly from the perspective of the method-side implementation process. It is understood that, in order to realize the above-mentioned functions, the electronic device comprises corresponding hardware structures and/or software modules for performing the respective functions. Those of skill in the art will readily appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments provided herein can be implemented as hardware or as a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends upon the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiment of the present application, the electronic device may be divided into the functional units according to the method example, for example, each functional unit may be divided corresponding to each function, or two or more functions may be integrated into one processing unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit. It should be noted that the division of the unit in the embodiment of the present application is schematic, and is only a logic function division, and there may be another division manner in actual implementation.
Fig. 7 is a block diagram showing functional units of an image processing apparatus 700 according to an embodiment of the present application. The image processing apparatus 700 is applied to the electronic device shown in fig. 1A, where the electronic device includes an AP terminal, a display screen, and an image processing chip, the image processing chip includes a first mobile industry processor interface module, a second mobile industry processor interface module, a video fusion module, and an output module, the first mobile industry processor interface module and the second mobile industry processor interface module are respectively connected to the video fusion module, the video fusion module is connected to the output module, and the output module is connected to the display screen. The image processing apparatus 700 comprises a processing unit 701 and a communication unit 702, wherein:
the processing unit 701 is configured to identify a data parameter of at least one original image data when the input of the at least one original image data is detected, where the data parameter includes a data type included in the original image data and a number of each data type, and the data type includes first Video image data and first OSD image data; determining at least one first Video image data and at least one first OSD image data according to the data parameter; inputting the at least one first OSD image into the first mobile industry processor interface module for processing to obtain at least one second OSD image; inputting the at least one first Video image into the second mobile industry processor interface module for processing to obtain at least one second Video image; synthesizing the at least one second Video image and the at least one second OSD image through the Video fusion module to obtain at least one target Video; and outputting the at least one target video to the display screen by the output module.
The image processing apparatus 700 may further include a storage unit 703 for storing program codes and data of the electronic device. The processing unit 701 may be a processor, the communication unit 702 may be a touch display screen or a transceiver, and the storage unit 703 may be a memory.
It can be seen that, in the embodiment of the present application, when the electronic device detects that at least one original image data is input, a data parameter of the at least one original image data is identified; determining at least one first Video image data and at least one first OSD image data according to the data parameter; inputting at least one first OSD image into a first mobile industry processor interface module for processing to obtain at least one second OSD image; inputting at least one first Video image into a second mobile industry processor interface module for processing to obtain at least one second Video image; synthesizing at least one second Video image and at least one second OSD image through a Video fusion module to obtain at least one target Video; and outputting the at least one target video to a display screen by an output module. Therefore, the target Video is obtained by processing the first OSD image and the first Video image separately through the two mobile industry processor interface channels, the problems of deformation and tearing of the on-screen display area caused by frame insertion of the Video's dynamic display content are avoided, the amount of frame insertion computation is reduced, and the power consumption is reduced.
In one possible example, in the aspect of determining the first Video image and the first OSD image according to the input at least one original image data image, the processing unit 701 is specifically configured to: identifying parameter information of the at least one original image data, the parameter information including hierarchical state information and display area information; and determining the first Video image and the first OSD image according to the parameter information.
In one possible example, the image processing chip further comprises an image area identification module, the image area identification module connecting the first mobile industry processor interface module and the second mobile industry processor interface module; when the parameter information is hierarchical state information, in the aspect of determining the first Video image and the first OSD image according to the parameter information, the processing unit 701 is specifically configured to: identifying hierarchical state information of an image of the at least one original image data; when the layering state information is layering, acquiring a first Video image and a first OSD image of the at least one original image data; and when the layering state information is no layering, identifying a first Video image and a first OSD image of the at least one original image data through the image area identification module.
In a possible example, when the hierarchical state information is no hierarchical layer, in terms of identifying the first Video image and the first OSD image of the at least one original image data by the image region identification module, the processing unit 701 is specifically configured to: when the input of the at least one original image data is detected, acquiring image information through the image area identification module, wherein the image information comprises image area information and image position information; and determining a first Video image and a first OSD image of the at least one original image data according to the image information.
In a possible example, when the image information includes image region information, in the aspect of determining the first Video image and the first OSD image of the at least one original image data according to the image information, the processing unit 701 is specifically configured to: determining a first image area with position change in the at least one original image data within a preset time, and setting the first image area as a first Video image; determining a second image area without position change within a preset time in the at least one original image data, and setting the second image area as a first OSD image.
In a possible example, when the image information includes image position information, in terms of determining the first Video image and the first OSD image of the at least one original image data according to the image information, the processing unit 701 is specifically configured to: determining at least one original image data display area with position change in the at least one original image data within Video playing time, and setting the at least one original image data display area as a first Video image; determining an interface static display area without position change in the at least one original image data within video playing time, and setting the interface static display area as a first OSD image.
In a possible example, after determining the at least one first Video image data and the at least one first OSD image data according to the data parameter, the processing unit 701 is further configured to: detecting window information of the display interface; and if the fact that at least one window respectively displays at least one piece of original image data is detected, determining a first Video image and a first OSD image of each window according to at least one original image data image input by each window.
It can be understood that, since the method embodiment and the apparatus embodiment are different presentation forms of the same technical concept, the content of the method embodiment portion in the present application should be synchronously adapted to the apparatus embodiment portion, and is not described herein again.
Embodiments of the present application also provide a computer storage medium, where the computer storage medium stores a computer program for electronic data exchange, the computer program enabling a computer to execute part or all of the steps of any one of the methods described in the above method embodiments, and the computer includes an electronic device.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any of the methods as described in the above method embodiments. The computer program product may be a software installation package, the computer comprising an electronic device.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, the above-described division of the units is only one type of division of logical functions, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of some interfaces, devices or units, and may be an electric or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit may be stored in a computer-readable memory if it is implemented in the form of a software functional unit and sold or used as a stand-alone product. Based on such understanding, the technical solution of the present application, in essence, or the part thereof contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned memory comprises: a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disk, and other media capable of storing program codes.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable memory, which may include: flash Memory disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
The foregoing detailed description of the embodiments of the present application has been presented to illustrate the principles and implementations of the present application, and the above description of the embodiments is only provided to help understand the method and the core concept of the present application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (10)

1. An image processing method, applied to an electronic device, wherein the electronic device comprises an AP end, a display screen, and an image processing chip; the image processing chip comprises at least one first mobile industry processor interface module, at least one second mobile industry processor interface module, a video fusion module, and an output module; the AP end is connected to the at least one first mobile industry processor interface module and to the at least one second mobile industry processor interface module; the at least one first mobile industry processor interface module and the at least one second mobile industry processor interface module are each connected to the video fusion module; the video fusion module is connected to the output module; and the output module is connected to the display screen; the method comprises the following steps:
when input of at least one piece of original image data is detected, identifying data parameters of the at least one piece of original image data, wherein the data parameters comprise data types and the number of each data type, and the data types comprise first Video image data and first OSD image data;
determining at least one piece of first Video image data and at least one piece of first OSD image data according to the data parameters;
inputting the at least one first OSD image into the first mobile industry processor interface module for processing to obtain at least one second OSD image;
inputting the at least one first Video image into the second mobile industry processor interface module for processing to obtain at least one second Video image;
synthesizing the at least one second Video image and the at least one second OSD image through the video fusion module to obtain at least one target video; and
outputting the at least one target video to the display screen through the output module.
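Read as a data-flow description rather than claim language, the method of claim 1 is a routing-and-fusion pipeline: split the input by data type, process each type through its own interface module, fuse, and output. The sketch below is a minimal illustration only; the `Frame` type, the `process_osd`/`process_video` stand-ins for the two mobile industry processor interface modules, and the list-based `fuse` are all hypothetical names, not the patented hardware.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Frame:
    kind: str          # "video" for moving content, "osd" for static overlays
    pixels: List[int]  # flattened placeholder payload

def process_osd(frame: Frame) -> Frame:
    # Stand-in for the first mobile industry processor interface module.
    return Frame("osd", frame.pixels)

def process_video(frame: Frame) -> Frame:
    # Stand-in for the second mobile industry processor interface module.
    return Frame("video", frame.pixels)

def fuse(video_frames: List[Frame], osd_frames: List[Frame]) -> List[List[int]]:
    # Stand-in for the video fusion module: overlay every OSD layer on each
    # video layer (list concatenation as a placeholder for alpha blending).
    out = []
    for v in video_frames:
        merged = list(v.pixels)
        for o in osd_frames:
            merged.extend(o.pixels)
        out.append(merged)
    return out

def pipeline(raw: List[Frame]) -> List[List[int]]:
    # Step 1 (claim 1): identify data parameters -- split input by data type.
    osd_in = [f for f in raw if f.kind == "osd"]
    video_in = [f for f in raw if f.kind == "video"]
    # Step 2: route each type through its own interface module.
    osd_out = [process_osd(f) for f in osd_in]
    video_out = [process_video(f) for f in video_in]
    # Step 3: fuse the processed layers into target video frames for display.
    return fuse(video_out, osd_out)
```

The point of the split is that the two paths can run different processing (e.g. frame interpolation on the Video path only) before fusion, which is what motivates separating OSD from Video data in the first place.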
2. The method of claim 1, wherein the determining at least one piece of first Video image data and at least one piece of first OSD image data according to the data parameters comprises:
determining layering state information and display area information according to the data parameters; and
determining the first Video image and the first OSD image according to the layering state information and the display area information.
3. The method of claim 2, wherein the image processing chip further comprises an image area identification module connected to the first mobile industry processor interface module and the second mobile industry processor interface module; and the determining the first Video image and the first OSD image according to the layering state information and the display area information comprises:
when the at least one piece of original image data is determined to be layered according to the layering state information, directly acquiring the first Video image and the first OSD image of the at least one piece of original image data; and
when the at least one piece of original image data is determined to be non-layered according to the layering state information, detecting and acquiring the first Video image and the first OSD image of the at least one piece of original image data through the image area identification module.
4. The method of claim 3, wherein the detecting and obtaining the first Video image and the first OSD image of the at least one original image data by the image region identification module when the at least one original image data is determined to be non-layered according to the layering state information comprises:
when the input of the at least one original image data is detected, acquiring image information through the image area identification module, wherein the image information comprises image area information and image position information;
determining the first Video image and the first OSD image of the at least one piece of original image data according to the image information.
5. The method of claim 4, wherein when the image information includes image region information, the determining the first Video image and the first OSD image of the at least one original image data according to the image information comprises:
determining a first image area, in the at least one piece of original image data, whose position changes within a preset time, and setting the first image area as the first Video image; and
determining a second image area, in the at least one piece of original image data, whose position does not change within the preset time, and setting the second image area as the first OSD image.
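The classification rule of claims 5 and 6 — areas that change within a preset time are Video, unchanged areas are OSD — can be illustrated with a simple frame-difference check. The function name `classify_regions`, the block-based tiling, and the two-frame comparison are illustrative assumptions for this sketch; in the patent the work is done by the image area identification module in hardware.

```python
from typing import Dict, List, Tuple

def classify_regions(frame_a: List[List[int]],
                     frame_b: List[List[int]],
                     block: int = 2) -> Dict[Tuple[int, int], str]:
    """Label each block-sized tile "video" if any pixel changed between the
    two sampled frames, else "osd". A toy stand-in for the image area
    identification module; frame_a and frame_b must have equal dimensions."""
    h, w = len(frame_a), len(frame_a[0])
    labels: Dict[Tuple[int, int], str] = {}
    for y in range(0, h, block):
        for x in range(0, w, block):
            # Scan every pixel in the tile; one difference marks it as moving.
            changed = any(
                frame_a[j][i] != frame_b[j][i]
                for j in range(y, min(y + block, h))
                for i in range(x, min(x + block, w))
            )
            labels[(y, x)] = "video" if changed else "osd"
    return labels
```

In practice the "preset time" of claim 5 would correspond to comparing frames sampled some interval apart, so that slowly-updating UI chrome still lands in the OSD bucket while playing video content lands in the Video bucket.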
6. The method of claim 4, wherein when the image information includes image position information, the determining the first Video image and the first OSD image of the at least one original image data according to the image information comprises:
determining, in the at least one piece of original image data, a display area whose position changes within the video playing time, and setting the display area as the first Video image; and
determining, in the at least one piece of original image data, a static interface display area whose position does not change within the video playing time, and setting the static interface display area as the first OSD image.
7. The method according to any one of claims 1-6, wherein the determining at least one piece of first Video image data and at least one piece of first OSD image data according to the data parameters further comprises:
detecting window information of the display interface; and
if it is detected that at least one window respectively displays at least one piece of original image data, determining a first Video image and a first OSD image for each window according to the at least one piece of original image data input to that window.
8. An image processing apparatus, applied to an electronic device, wherein the electronic device comprises an AP end, a display screen, and an image processing chip; the image processing chip comprises at least one first mobile industry processor interface module, at least one second mobile industry processor interface module, a video fusion module, and an output module; the AP end is connected to the at least one first mobile industry processor interface module and to the at least one second mobile industry processor interface module; the at least one first mobile industry processor interface module and the at least one second mobile industry processor interface module are each connected to the video fusion module; the video fusion module is connected to the output module; and the output module is connected to the display screen; the apparatus comprises a processing unit and a communication unit, wherein:
the processing unit is configured to: identify data parameters of at least one piece of original image data when input of the at least one piece of original image data is detected, wherein the data parameters comprise data types and the number of each data type, and the data types comprise first Video image data and first OSD image data; determine at least one piece of first Video image data and at least one piece of first OSD image data according to the data parameters; input the at least one first OSD image into the first mobile industry processor interface module for processing to obtain at least one second OSD image; input the at least one first Video image into the second mobile industry processor interface module for processing to obtain at least one second Video image; synthesize the at least one second Video image and the at least one second OSD image through the video fusion module to obtain at least one target video; and output the at least one target video to the display screen through the output module.
9. An electronic device, comprising a processor, a memory, a communication interface, and one or more programs stored in the memory and configured to be executed by the processor, the one or more programs comprising instructions for performing the steps in the method according to any one of claims 1-7.
10. A computer-readable storage medium, wherein the computer-readable storage medium stores a computer program for electronic data exchange, and the computer program causes a computer to perform the method according to any one of claims 1-7.
CN202010313697.3A 2020-04-20 2020-04-20 Image processing method and related device Pending CN113542623A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010313697.3A CN113542623A (en) 2020-04-20 2020-04-20 Image processing method and related device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010313697.3A CN113542623A (en) 2020-04-20 2020-04-20 Image processing method and related device

Publications (1)

Publication Number Publication Date
CN113542623A true CN113542623A (en) 2021-10-22

Family

ID=78123652

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010313697.3A Pending CN113542623A (en) 2020-04-20 2020-04-20 Image processing method and related device

Country Status (1)

Country Link
CN (1) CN113542623A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108024133A (en) * 2016-10-28 2018-05-11 深圳市中兴微电子技术有限公司 A kind of information output display method and device
CN110933496A (en) * 2019-12-10 2020-03-27 Oppo广东移动通信有限公司 Image data frame insertion processing method and device, electronic equipment and storage medium
CN110933497A (en) * 2019-12-10 2020-03-27 Oppo广东移动通信有限公司 Video image data frame insertion processing method and related equipment


Similar Documents

Publication Publication Date Title
CN109379625B (en) Video processing method, video processing device, electronic equipment and computer readable medium
CN110377264B (en) Layer synthesis method, device, electronic equipment and storage medium
EP3879839A1 (en) Video processing method and apparatus, and electronic device and computer-readable medium
CN111356026B (en) Image data processing method and related device
EP2082393B1 (en) Image processing apparatus for superimposing windows displaying video data having different frame rates
KR100855611B1 (en) Method, apparatus and system for showing and editing multiple video streams on a small screen with a minimal input device
JP4346591B2 (en) Video processing apparatus, video processing method, and program
CN106980510B (en) Window self-adaption method and device of player
US20130009997A1 (en) Pinch-to-zoom video apparatus and associated method
CN110363831B (en) Layer composition method and device, electronic equipment and storage medium
CN111064863B (en) Image data processing method and related device
CN110569013B (en) Image display method and device based on display screen
CN107870703B (en) Method, system and terminal equipment for full-screen display of picture
CN113778360B (en) Screen projection method and electronic equipment
JP2014077993A (en) Display device
TW201327466A (en) Image editing system and editing method
US20150078734A1 (en) Display apparatus and controlling method thereof
CN112905134A (en) Method and device for refreshing display and electronic equipment
WO2023125316A1 (en) Video processing method and apparatus, electronic device, and medium
CN113542623A (en) Image processing method and related device
WO2023125273A1 (en) Image display method of electronic equipment, image processing circuit and electronic equipment
US6008854A (en) Reduced video signal processing circuit
US11189254B2 (en) Video processing device, display device, video processing method, and recording medium
CN114008570A (en) Touch display device, touch response method and system thereof, and storage medium
EP4300979A1 (en) Display method, terminal, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination