WO2022037251A1 - Video data processing method and device - Google Patents

Video data processing method and device

Info

Publication number
WO2022037251A1
WO2022037251A1 · PCT/CN2021/102574 · CN2021102574W
Authority
WO
WIPO (PCT)
Prior art keywords
frame insertion
target video
image data
chip
target
Prior art date
Application number
PCT/CN2021/102574
Other languages
English (en)
French (fr)
Inventor
范泽华
郑超
陈江川
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp., Ltd.
Publication of WO2022037251A1 publication Critical patent/WO2022037251A1/zh

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/132Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking

Definitions

  • the present application relates to the field of electronic technology, and in particular, to a video data processing method and device.
  • video frame insertion is real-time frame insertion, that is, frame insertion is performed while the video is playing, and is synchronously transmitted to the display screen for display.
  • a frame insertion chip is usually bridged between the application processor (AP) and the display screen to perform digital signal processing (DSP).
  • real-time video frame insertion applies only to the current playback; the next time the video is played, the frame insertion operation must be performed again.
  • Embodiments of the present application provide a video data processing method and apparatus.
  • an embodiment of the present application provides a video data processing method, which is applied to an electronic device, where the electronic device includes an AP, a frame insertion chip, a display screen, and a memory, and the method includes:
  • the AP sends the image data of the target video to the frame insertion chip
  • the frame insertion chip performs frame insertion processing on the received image data of the target video
  • the frame insertion chip sends the frame-inserted image data of the target video to the display screen, and sends target information to the AP, where the target information is used to enable the frame-inserted image data of the target video to be stored in the memory.
  • an embodiment of the present application provides a video data processing apparatus, which is applied to an electronic device including a display screen and a memory, and the apparatus includes an AP and a frame insertion chip, wherein;
  • the AP is used to send the image data of the target video to the frame insertion chip
  • the frame insertion chip is used to perform frame insertion processing on the received image data of the target video
  • the frame insertion chip is also used to send the frame-inserted image data of the target video to the display screen, and to send first information to the AP, where the first information is used to enable the frame-inserted image data of the target video to be stored in the memory.
  • an embodiment of the present application provides an electronic device, including a processor, a memory, a communication interface, a display screen, and one or more programs, wherein the processor includes an AP and a frame insertion chip, the one or more programs are stored in the memory and configured to be executed by the AP and the frame insertion chip, and the programs include instructions for executing the steps in any method of the first aspect of the embodiments of the present application.
  • an embodiment of the present application provides a computer-readable storage medium, wherein the computer-readable storage medium stores a computer program for electronic data exchange, and the computer program causes a computer to execute some or all of the steps described in any method of the first aspect of the embodiments of the present application.
  • an embodiment of the present application provides a computer program product, wherein the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to execute some or all of the steps described in any method of the first aspect of the embodiments of the present application.
  • the computer program product may be a software installation package.
  • FIG. 1 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • FIG. 2 is a schematic diagram of the existing connection between an AP, a frame insertion chip and a display screen;
  • FIG. 3 is a schematic diagram of a connection between an AP, a frame insertion chip and a display screen provided by an embodiment of the present application;
  • FIG. 4 is a schematic diagram of a software structure of an electronic device provided by an embodiment of the present application.
  • FIG. 5 is a schematic flowchart of a video data processing method provided by an embodiment of the present application.
  • FIG. 6 is a schematic diagram of determining a transition frame provided by an embodiment of the present application.
  • FIG. 7 is another schematic diagram of determining a transition frame provided by an embodiment of the present application.
  • FIG. 8 is another schematic diagram of determining a transition frame provided by an embodiment of the present application.
  • FIG. 9 is a schematic diagram of an example;
  • FIG. 10 is a schematic diagram of another example;
  • FIG. 11 is a schematic structural diagram of an existing frame insertion chip
  • FIG. 12 is a schematic structural diagram of a frame insertion chip provided by an embodiment of the present application.
  • FIG. 13 is a schematic structural diagram of a video data processing apparatus provided by an embodiment of the present application.
  • the electronic device may be a portable electronic device that also includes other functions such as personal digital assistant and/or music player functions, such as a mobile phone, a tablet computer, a wearable electronic device (eg, a smart watch) with wireless communication capabilities, and the like.
  • portable electronic devices include, but are not limited to, portable electronic devices running iOS, Android, Microsoft, or other operating systems.
  • the above-mentioned portable electronic device may also be other portable electronic devices, such as a laptop computer (Laptop) or the like. It should also be understood that, in some other embodiments, the above-mentioned electronic device may not be a portable electronic device, but a desktop computer.
  • FIG. 1 shows a schematic structural diagram of an electronic device 100 .
  • the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headphone jack 170D, a sensor module 180, a compass 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, and so on.
  • the structures illustrated in the embodiments of the present application do not constitute a specific limitation on the electronic device 100 .
  • the electronic device 100 may include more or less components than shown, or combine some components, or separate some components, or arrange different components.
  • the illustrated components may be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units; for example, the processor 110 may include an application processor (AP), a graphics processing unit (GPU), an image signal processor (ISP), a video codec, a digital signal processor (DSP), a baseband processor, and so on. Different processing units may be independent components, or may be integrated in one or more processors.
  • electronic device 100 may also include one or more processors 110 .
  • the processor 110 may generate an operation control signal according to the instruction operation code and the timing signal, and complete the control of fetching and executing instructions.
  • a memory may also be provided in the processor 110 for storing instructions and data.
  • the processor 110 may include one or more interfaces.
  • the interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a SIM card interface, and/or a USB interface, etc.
  • the interface connection relationship between the modules illustrated in the embodiments of the present application is only a schematic illustration, and does not constitute a structural limitation of the electronic device 100 .
  • the electronic device 100 may also adopt different interface connection manners in the foregoing embodiments, or a combination of multiple interface connection manners.
  • the charging management module 140 is used to receive charging input from the charger.
  • the power management module 141 is used for connecting the battery 142 , the charging management module 140 and the processor 110 .
  • the wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modulation and demodulation processor, the baseband processor, and the like.
  • the mobile communication module 150 may provide wireless communication solutions including 2G/3G/4G/5G etc. applied on the electronic device 100 .
  • the wireless communication module 160 may provide communication solutions applied on the electronic device 100, including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) networks), Bluetooth (BT), and the like.
  • the electronic device 100 implements a display function through a GPU, a display screen 194, an application processor, and the like.
  • the GPU is a microprocessor for image processing, and is connected to the display screen 194 and the application processor.
  • the display screen 194 is used to display images, videos, and the like.
  • Display screen 194 includes a display panel.
  • electronic device 100 may include one or more display screens 194 .
  • in the existing scheme, the connection relationship between the AP, the frame insertion chip and the display screen is shown in Figure 2.
  • the output interface of the AP is connected with the input interface of the frame insertion chip
  • the output interface of the frame insertion chip is connected with the input interface of the display screen.
  • connection relationship between the AP, the frame insertion chip and the display screen is shown in Figure 3.
  • the output interface of the AP is connected to the input interface of the frame insertion chip
  • the first output interface of the frame insertion chip is connected to the AP.
  • the second output interface of the frame insertion chip is connected with the input interface of the display screen.
  • the electronic device 100 may implement a shooting function through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
  • the ISP is used to process the data fed back by the camera 193 .
  • the ISP may be provided in the camera 193 .
  • Camera 193 is used to capture still images or video.
  • the electronic device 100 may include one or more cameras 193 .
  • the external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 100 .
  • Internal memory 121 may be used to store one or more computer programs including instructions.
  • the processor 110 may execute the above-mentioned instructions stored in the internal memory 121, thereby causing the electronic device 100 to execute the frame insertion processing method provided in some embodiments of the present application, as well as various applications and data processing.
  • the internal memory 121 may include a high-speed random access memory, and may also include a nonvolatile memory.
  • the processor 110 may cause the electronic device 100 to execute the frame insertion processing methods provided in the embodiments of the present application, as well as other applications and data processing, by executing the instructions stored in the internal memory 121 and/or the instructions stored in the memory provided in the processor 110.
  • the electronic device 100 may implement audio functions, such as music playback and recording, through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like.
  • the sensor module 180 may include at least one sensor, as shown in FIG. 1 .
  • FIG. 4 shows a software structural block diagram of the electronic device 100 .
  • the layered architecture divides the software into several layers, and each layer has a clear role and division of labor. Layers communicate with each other through software interfaces.
  • the Android system is divided into four layers, which are, from top to bottom, an application layer, an application framework layer, an Android runtime (Android runtime) and a system library, and a kernel layer.
  • the application layer can include a series of application packages.
  • the application layer may include at least one application.
  • the application framework layer provides an application programming interface (API) and a programming framework for applications in the application layer.
  • the application framework layer includes some predefined functions.
  • the window manager included in the application framework layer is used to manage window programs.
  • the content providers included in the application framework layer are used to store and retrieve data and make these data accessible to applications.
  • the view system included in the application framework layer includes visual controls, such as controls for displaying text, controls for displaying pictures, and so on. View systems can be used to build applications.
  • a display interface can consist of one or more views.
  • the phone manager included in the application framework layer is used to provide the communication function of the electronic device 100 .
  • the resource manager included in the application framework layer provides various resources for the application, such as localization strings, icons, pictures, layout files, video files and so on.
  • the notification manager included in the application framework layer enables the application to display notification information in the status bar, which can be used to convey notification-type messages, and can disappear automatically after a short stay without user interaction.
  • the notification manager can also display notifications in the status bar at the top of the system in the form of a graphic or scroll bar text.
  • Android Runtime includes core libraries and a virtual machine. Android runtime is responsible for scheduling and management of the Android system.
  • the core library consists of two parts: one is the functions that the Java language needs to call, and the other is the core library of Android.
  • the application layer and the application framework layer run in virtual machines.
  • a system library can include multiple functional modules. For example: surface manager (surface manager), media library (media library), 3D graphics processing library (eg: OpenGL ES), 2D graphics engine (eg: SGL), etc.
  • the kernel layer is the layer between hardware and software.
  • the kernel layer contains at least display drivers, camera drivers, audio drivers, and sensor drivers.
  • FIG. 5 is a schematic flowchart of a video data processing method provided by an embodiment of the present application, which is applied to the above electronic device. As shown in the figure, the video data processing method includes the following operations.
  • Step 501 The AP sends the image data of the target video to the frame insertion chip.
  • Step 502 The frame insertion chip performs frame insertion processing on the received image data of the target video.
  • Step 503: The frame insertion chip sends the frame-inserted image data of the target video to the display screen, and sends target information to the AP, where the target information is used to enable the frame-inserted image data of the target video to be stored in the memory.
  • the method further includes: after receiving the image data after frame insertion of the target video, performing display processing on the display screen.
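  • The three steps above can be sketched in Python as follows. This is a minimal illustration only: all function and variable names are hypothetical, and the stand-in "interpolation" simply inserts a placeholder after each source frame, since the patent does not specify an API.

```python
def insert_frames(frames):
    """Stand-in frame insertion: emit each source frame followed by a placeholder transition frame."""
    out = []
    for f in frames:
        out.append(f)
        out.append(("interp", f))  # placeholder for a real interpolated frame
    return out

def process_target_video(frames, memory):
    # Step 501: the AP sends the target video's image data to the frame insertion chip.
    # Step 502: the chip performs frame insertion on the received image data.
    interpolated = insert_frames(frames)
    # Step 503: the chip sends the result to the display, and target information to
    # the AP so the frame-inserted data ends up stored in memory for later reuse.
    displayed = list(interpolated)
    memory["target_video"] = interpolated
    return displayed

mem = {}
shown = process_target_video(["f0", "f1"], mem)
# the displayed sequence and the cached copy hold the same frame-inserted data
```

  • The point of Step 503 is visible in the final state: the same frame-inserted data that reached the display is now cached, so a later playback can skip Step 502 entirely.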
  • the target video may be a video downloaded by the electronic device from a video source supplier, or may be a video captured by the electronic device through a camera module, or the like.
  • the frame insertion chip performs frame insertion processing, including: the frame insertion chip performs frame insertion processing based on a video frame insertion algorithm.
  • the video frame insertion algorithm is: determining N first feature points based on the current frame image and the next frame image, where the first feature points are pixels that have changed in the next frame image compared with the current frame image, and N is a positive integer;
  • the first transition frame image is inserted between the current frame image and the next frame image.
  • determining the motion vector of each of the first feature points to obtain N first motion vectors includes: determining the motion vector of each first feature point based on the pixel information of the first feature point in the current frame image and its pixel information in the next frame image, to obtain N first motion vectors.
  • the pixel information of the first feature point includes position information and brightness information of the first feature point in the image.
  • the next frame image is the frame immediately following the current frame image.
  • determining the first transition frame image based on the current frame image, the next frame image and the N first motion vectors includes: determining the pixel information of the first transition frame image based on the pixel information of the current frame image, the pixel information of the next frame image and the N first motion vectors.
  • the first transition frame image is generated based on pixel information of the first transition frame image.
  • in an example, the current frame image and the next frame image of the operation interface are shown in Figure 6, with 4 first feature points (the 2×2 gray squares in the figure). If the position information of these 4 first feature points in the current frame image is (a11, b11), (a12, b12), (a13, b13) and (a14, b14), and their position information in the next frame image is (a21, b21), (a22, b22), (a23, b23) and (a24, b24), then the motion vectors of the 4 first feature points are (a21-a11, b21-b11), (a22-a12, b22-b12), (a23-a13, b23-b13) and (a24-a14, b24-b14). Finally, the first transition frame image can be generated from the current frame image, the next frame image and the determined motion vectors, as shown in Figure 6.
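  • The Figure 6 arithmetic can be sketched numerically as follows. The motion vectors follow the formulas above; placing the transition frame's feature points at the midpoint of each vector is an illustrative assumption, since the patent does not fix the interpolation fraction, and all function names are hypothetical.

```python
def motion_vectors(curr_pts, next_pts):
    """Per-feature-point displacement (a2i - a1i, b2i - b1i) from current to next frame."""
    return [(nx - cx, ny - cy) for (cx, cy), (nx, ny) in zip(curr_pts, next_pts)]

def midpoint_transition(curr_pts, vectors):
    """Place each feature point halfway along its motion vector (assumed fraction 1/2)."""
    return [(cx + vx / 2, cy + vy / 2) for (cx, cy), (vx, vy) in zip(curr_pts, vectors)]

# the 2x2 block of feature points from the Figure 6 example, with concrete coordinates
curr = [(0, 0), (1, 0), (0, 1), (1, 1)]   # (a1i, b1i) in the current frame
nxt  = [(4, 2), (5, 2), (4, 3), (5, 3)]   # (a2i, b2i) in the next frame
vecs = motion_vectors(curr, nxt)           # each vector is (4, 2)
trans = midpoint_transition(curr, vecs)    # feature points of the first transition frame
# trans -> [(2.0, 1.0), (3.0, 1.0), (2.0, 2.0), (3.0, 2.0)]
```

  • With a uniform displacement of (4, 2), every transition-frame feature point sits exactly halfway between its current-frame and next-frame positions.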
  • the video frame insertion algorithm is: determining M second feature points based on the current frame image and the previous K frame images of the current frame image, where the second feature points are pixels that have changed in the current frame image compared with the previous K frame images, and M and K are both positive integers;
  • the second transition frame image is inserted between the current frame image and the next frame image.
  • the previous frame image is the previous frame image of the current frame image.
  • determining the motion vector of each of the second feature points to obtain M second motion vectors includes: determining the motion vector of each second feature point based on the pixel information of the second feature point in the current frame image and its pixel information in the previous frame image, to obtain M second motion vectors.
  • the pixel information of the second feature point includes position information and brightness information of the second feature point in the image.
  • determining the motion vector of each of the second feature points to obtain M second motion vectors includes: determining the motion vector of each second feature point based on the pixel information of the second feature point in the current frame image and the pixel information of the target image, to obtain M second motion vectors.
  • in an example, the current frame image and the previous frame image of the operation interface are shown in Figure 7, with 4 second feature points (the 2×2 gray squares in the figure). If the position information of these 4 second feature points in the previous frame image is (c11, d11), (c12, d12), (c13, d13) and (c14, d14), and their position information in the current frame image is (c21, d21), (c22, d22), (c23, d23) and (c24, d24), then the motion vectors of the 4 second feature points are (c21-c11, d21-d11), (c22-c12, d22-d12), (c23-c13, d23-d13) and (c24-c14, d24-d14). Finally, the second transition frame image can be generated from the current frame image, the previous frame image and the determined motion vectors, as shown in Figure 7.
  • in another example, the current frame image and the target image of the operation interface are shown in Figure 8, with 4 second feature points (the 2×2 gray squares in the figure). If the position information of these 4 second feature points in the target image is (e11, f11), (e12, f12), (e13, f13) and (e14, f14), and their position information in the current frame image is (e21, f21), (e22, f22), (e23, f23) and (e24, f24), then the motion vectors of the 4 second feature points are [(e21-e11)/2, (f21-f11)/2], [(e22-e12)/2, (f22-f12)/2], [(e23-e13)/2, (f23-f13)/2] and [(e24-e14)/2, (f24-f14)/2]. Finally, the second transition frame image can be generated from the current frame image, the target image and the determined motion vectors, as shown in Figure 8.
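  • The Figure 8 variant can be sketched as follows: the displacement is measured against a target image K frames before the current frame and scaled by 1/K, approximating per-frame motion, and the transition frame then extrapolates forward from the current frame. The one-step extrapolation and all names are illustrative assumptions, not specified by the patent.

```python
def per_frame_vectors(target_pts, curr_pts, k):
    """Average per-frame displacement [(e2i - e1i)/k, (f2i - f1i)/k] over the last k frames."""
    return [((cx - tx) / k, (cy - ty) / k)
            for (tx, ty), (cx, cy) in zip(target_pts, curr_pts)]

def extrapolated_transition(curr_pts, vectors):
    """Advance each feature point one per-frame step beyond the current frame (assumed step)."""
    return [(cx + vx, cy + vy) for (cx, cy), (vx, vy) in zip(curr_pts, vectors)]

target = [(0, 0), (1, 0), (0, 1), (1, 1)]    # (e1i, f1i), K frames before the current frame
curr   = [(4, 2), (5, 2), (4, 3), (5, 3)]    # (e2i, f2i), current frame
vecs = per_frame_vectors(target, curr, k=2)  # each vector is (2.0, 1.0)
trans = extrapolated_transition(curr, vecs)  # feature points of the second transition frame
# trans -> [(6.0, 3.0), (7.0, 3.0), (6.0, 4.0), (7.0, 4.0)]
```

  • Unlike the Figure 6 variant, this scheme needs no future frame: the transition frame is predicted from past motion, which suits real-time playback where the next frame may not be available yet.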
  • it can be seen that the AP first sends the image data of the target video to the frame insertion chip, the frame insertion chip performs frame insertion processing on the received image data, and then sends the frame-inserted image data of the target video to the display screen for display while also sending the target information to the AP. The target information enables the frame-inserted image data of the target video to be stored in the memory, so that on a later playback the stored frame-inserted data can be read and the frame-inserted video reused, finally achieving the purpose of reducing power consumption.
  • the target information is identification information
  • the identification information is used to indicate that frame insertion processing has been performed on the target video.
  • after the frame insertion chip sends the target information to the AP, the method further includes:
  • the AP acquires the frame-inserted image data of the target video from the service device, and stores the frame-inserted image data of the target video in the memory.
  • the method before the AP acquires the image data after frame insertion of the target video from the service device, the method further includes:
  • the AP sends the target video to the service device during a target time period, so that the service device performs frame insertion processing on the target video, where the target time period is a non-working time period or a rarely-used time period of the electronic device.
  • the service device is integrated into the electronic device, the service device includes an algorithm module and a storage module, and the algorithm module stores the above-mentioned video frame insertion processing algorithm.
  • the service device is integrated in the processor, or the service device is a module independent of the processor.
  • the AP sends the target video to the service device; after the service device receives the target video, the service device performs frame insertion processing on the target video based on the above-mentioned video frame insertion processing algorithm to obtain an image after frame insertion of the target video.
  • the service device stores the frame-inserted image data of the target video in its storage module; the AP obtains the frame-inserted image data from the storage module of the service device, and then stores the acquired frame-inserted image data of the target video in the memory.
  • it can be seen that the frame insertion chip feeds back target information to the AP, and after receiving the target information, the AP sends the target video to the service device during the idle period, so that the service device performs frame insertion processing on the target video during the idle period. Finally, the AP obtains the frame-inserted image data of the target video from the service device and stores it in the memory, so that the frame-inserted video can be reused, finally achieving a reduction in power consumption.
  • the service device performs frame insertion processing during idle periods, avoiding additional computational burden on the electronic device during non-idle periods and thereby avoiding stuttering of the electronic device.
  • the service device is a cloud server
  • the cloud server includes a cloud algorithm and a cloud storage
  • the cloud algorithm includes the above-mentioned video frame insertion processing algorithm
  • the cloud storage is used to store the frame-inserted video data of the target video.
  • the AP sends the target video to the cloud server; the cloud server performs frame insertion processing on the target video based on the above-mentioned video frame insertion processing algorithm to obtain the frame-inserted image data of the target video; the cloud server stores the frame-inserted image data in its cloud storage; the AP obtains the frame-inserted image data of the target video from the cloud storage, and then stores it in the memory.
  • it can be seen that the frame insertion chip feeds back identification information to the AP, and after receiving the identification information, the AP sends the target video to the cloud server during the idle period, so that the cloud server performs frame insertion processing on the target video during the idle period. Finally, the AP obtains the frame-inserted image data of the target video from the cloud server and stores it in the memory, so that the frame-inserted video can be reused, finally achieving a reduction in power consumption. In addition, the cloud server performs frame insertion processing during idle periods, avoiding additional computational burden on the cloud server during non-idle periods.
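  • The offload flow above can be sketched as follows. The idle-window check, the service interface, and all names are hypothetical; the "service" here again uses a placeholder interpolation to stand in for the real frame insertion algorithm.

```python
def offload_frame_insertion(ap_memory, video_id, frames, service, is_idle):
    """Send the video to the service device only during an idle period; cache the result."""
    if not is_idle():
        return False  # defer: offload happens only in the target (idle) time period
    interpolated = service.insert_frames(frames)  # frame insertion on the service/cloud side
    ap_memory[video_id] = interpolated            # AP stores the frame-inserted data for reuse
    return True

class DummyService:
    """Placeholder for the service device / cloud server."""
    def insert_frames(self, frames):
        out = []
        for f in frames:
            out += [f, ("interp", f)]  # placeholder transition frame after each source frame
        return out

mem = {}
done = offload_frame_insertion(mem, "v1", ["f0", "f1"], DummyService(), lambda: True)
# during an idle period the offload runs and the result lands in the AP's memory
```

  • The same call with a non-idle predicate returns immediately without contacting the service, which is the behavior that keeps the extra computation off busy periods.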
  • the electronic device further includes a memory
  • the target information is image data after frame insertion of the target video
  • after the frame insertion chip sends the target information to the AP, the method further includes: the AP stores the frame-inserted image data of the target video in the memory.
  • it can be seen that the frame insertion chip sends the frame-inserted image data of the target video to the display screen and also sends it to the AP, so that the AP stores the frame-inserted image data of the target video in the memory. In this way, the frame-inserted video can be reused, finally achieving the purpose of reducing power consumption.
  • the method further includes:
  • when detecting a playback instruction for the target video, the AP obtains the frame-inserted image data of the target video from the memory and sends it to the frame insertion chip;
  • the frame insertion chip sends the received frame-inserted image data of the target video to the display screen;
  • after receiving the frame-inserted image data of the target video, the display screen performs display processing.
  • it can be seen that when the electronic device detects a playback instruction for the target video again, the frame insertion chip does not need to perform frame insertion processing and directly transmits the frame-inserted image data of the target video to the display screen for display, which reduces the operation of the frame insertion chip and further reduces the power consumption of the electronic device.
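  • The replay path can be sketched as a cache lookup. Names are illustrative; the point is that interpolation runs at most once per video, and repeated playbacks are a pass-through.

```python
def play(video_id, memory, insert_frames):
    """Return the frames to display, performing frame insertion only on a cache miss."""
    cached = memory.get(video_id)
    if cached is not None:
        return cached, False          # chip pass-through: no re-interpolation needed
    interpolated = insert_frames()    # first playback: interpolate...
    memory[video_id] = interpolated   # ...and cache for the next playback
    return interpolated, True

calls = []
def fake_insert():
    calls.append(1)                   # count how many times interpolation actually runs
    return ["f0", "mid", "f1"]

mem = {}
first, interpolated_now1 = play("v1", mem, fake_insert)
second, interpolated_now2 = play("v1", mem, fake_insert)
# interpolation ran exactly once; the second playback reused the cached data
```

  • The single interpolation call across two playbacks is precisely the power saving the passage describes: the chip's expensive work is amortized over every replay.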
  • the structure of the current frame insertion chip is shown in Figure 11.
  • the frame insertion chip includes an input control unit, a compression processing unit, and an output control unit.
  • the input control unit is connected to the compression processing unit, and the compression processing unit is connected to the output control unit.
  • the frame insertion chip includes an input control unit, a compression processing unit, an output control unit, and a storage control unit;
  • the input interface of the frame insertion chip is connected to the input control unit; the input control unit is connected to the compression processing unit; the compression processing unit is connected to the output control unit and the storage control unit respectively; the output control unit is connected to the first output interface of the frame insertion chip, the second output interface of the frame insertion chip, and the storage control unit respectively; and the storage control unit is connected to the memory.
  • the storage control unit is also connected to the storage.
  • the electronic device further includes a shooting module, and the target video is a video shot in real time by the shooting module.
  • a storage control unit is added to the frame insertion chip, expanding its storage capacity from only 1 to 2 frames of content to more content; in addition, a channel from the compression processing unit to the storage control unit and a channel from the output control unit to the storage control unit are added, so that the storage control unit can communicate not only with the compression processing unit but also cooperate with the output control unit, which enhances the functionality of the frame insertion chip.
  • the electronic device includes corresponding hardware and/or software modules for executing each function.
  • the present application can be implemented in hardware, or in a combination of hardware and computer software, in conjunction with the algorithm steps of the examples described in the embodiments disclosed herein. Whether a function is performed by hardware or by computer software driving hardware depends on the specific application and design constraints of the technical solution. Those skilled in the art may use different methods to implement the described functionality for each particular application in conjunction with the embodiments, but such implementations should not be considered beyond the scope of this application.
  • the electronic device can be divided into functional modules according to the above method examples.
  • each functional module can be divided corresponding to each function, or two or more functions can be integrated into one processing module.
  • the above-mentioned integrated modules can be implemented in the form of hardware. It should be noted that the division of modules in this embodiment is schematic and is only a logical function division; there may be other division manners in actual implementation.
  • FIG. 13 shows a schematic diagram of a video data processing apparatus.
  • the video data processing apparatus is applied to an electronic device including a display screen and a memory.
  • the video data processing apparatus may include: an AP 1301 and a frame insertion chip 1302.
  • the AP1301 may be used to support the electronic device to perform the above-mentioned step 501, etc., and/or be used for other processes of the technology described herein.
  • the frame insertion chip 1302 may be used to support the electronic device to perform the above-described steps 502, 503, etc., and/or other processes for the techniques described herein.
  • the electronic device provided in this embodiment is used to execute the above-mentioned video data processing method, and thus can achieve the same effect as the above-mentioned implementation method.
  • the electronic device may include a processing module, a memory module and a communication module.
  • the processing module may be used to control and manage the actions of the electronic device, for example, may be used to support the electronic device to perform the steps performed by the AP 1301 and the frame insertion chip 1302 described above.
  • the storage module may be used to support the electronic device to execute stored program codes and data, and the like.
  • the communication module can be used to support the communication between the electronic device and other devices.
  • the processing module may be a processor or a controller. It may implement or execute the various exemplary logical blocks, modules and circuits described in connection with this disclosure.
  • the processor may also be a combination that implements computing functions, such as a combination of one or more microprocessors, a combination of digital signal processing (DSP) and a microprocessor, and the like.
  • the storage module may be a memory.
  • the communication module may specifically be a device that interacts with other electronic devices, such as a radio frequency circuit, a Bluetooth chip, and a Wi-Fi chip.
  • the electronic device involved in this embodiment may be a device having the structure shown in FIG. 1 .
  • This embodiment also provides a computer storage medium, where computer instructions are stored in the computer storage medium; when the computer instructions are executed on the electronic device, the electronic device executes the above-mentioned relevant method steps to realize the video data processing method in the above-mentioned embodiments.
  • This embodiment also provides a computer program product, which, when the computer program product runs on a computer, causes the computer to execute the above-mentioned relevant steps, so as to realize the video data processing method in the above-mentioned embodiment.
  • the embodiments of the present application also provide an apparatus, which may specifically be a chip, a component, or a module; the apparatus may include a processor and a memory connected to each other, where the memory is used to store computer-executable instructions. When the apparatus runs, the processor can execute the computer-executable instructions stored in the memory, so that the chip executes the video data processing method in each of the above method embodiments.
  • the electronic device, computer storage medium, computer program product, or chip provided in this embodiment is used to execute the corresponding method provided above; therefore, for the beneficial effects that can be achieved, reference can be made to the beneficial effects of the corresponding method provided above, which will not be repeated here.
  • the disclosed apparatus and method may be implemented in other manners.
  • the apparatus embodiments described above are only illustrative.
  • the division of modules or units is only a logical function division; in actual implementation there may be other division manners.
  • multiple units or components may be combined or integrated into another device, or some features may be omitted or not implemented.
  • the shown or discussed mutual coupling or direct coupling or communication connection may be through some interfaces, indirect coupling or communication connection of devices or units, and may be in electrical, mechanical or other forms.
  • Units described as separate components may or may not be physically separated, and components shown as units may be one physical unit or multiple physical units, that is, may be located in one place, or may be distributed in multiple different places. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units may be implemented in the form of hardware, or may be implemented in the form of software functional units.
  • the integrated unit if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a readable storage medium.
  • a readable storage medium includes several instructions to cause a device (which may be a single-chip microcomputer, a chip, etc.) or a processor to execute all or part of the steps of the methods in the various embodiments of the present application.
  • the aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or other media that can store program code.


Abstract

A video data processing method and apparatus, applied to an electronic device including an AP, a frame insertion chip, a display screen, and a memory. The method includes: the AP sends image data of a target video to the frame insertion chip; the frame insertion chip performs frame insertion processing on the received image data of the target video; the frame insertion chip sends the frame-inserted image data of the target video to the display screen, and sends target information to the AP, the target information being used to cause the frame-inserted image data of the target video to be stored in the memory. With this method, the frame-inserted video can be reused, ultimately reducing power consumption.

Description

Video Data Processing Method and Apparatus
Technical Field
The present application relates to the field of electronic technology, and in particular to a video data processing method and apparatus.
Background
At present, video frame insertion is performed in real time, that is, frames are inserted while the video is playing and the result is synchronously transmitted to the display screen for display. To achieve real-time video frame insertion, a frame insertion chip for digital signal processing (DSP) is usually bridged between the application processor (AP) and the display screen. Real-time frame insertion serves only the current playback; the next time the video is played, the frame insertion operation must be performed again.
Summary
Embodiments of the present application provide a video data processing method and apparatus.
In a first aspect, an embodiment of the present application provides a video data processing method, applied to an electronic device including an AP, a frame insertion chip, a display screen, and a memory. The method includes:
the AP sends image data of a target video to the frame insertion chip;
the frame insertion chip performs frame insertion processing on the received image data of the target video;
the frame insertion chip sends the frame-inserted image data of the target video to the display screen, and sends target information to the AP, the target information being used to cause the frame-inserted image data of the target video to be stored in the memory.
In a second aspect, an embodiment of the present application provides a video data processing apparatus, applied to an electronic device including a display screen and a memory. The apparatus includes an AP and a frame insertion chip, wherein:
the AP is configured to send image data of a target video to the frame insertion chip;
the frame insertion chip is configured to perform frame insertion processing on the received image data of the target video;
the frame insertion chip is further configured to send the frame-inserted image data of the target video to the display screen, and to send first information to the AP, the first information being used to cause the frame-inserted image data of the target video to be stored in the memory.
In a third aspect, an embodiment of the present application provides an electronic device including a processor, a memory, a communication interface, a display screen, and one or more programs, wherein the processor includes an AP and a frame insertion chip, the one or more programs are stored in the memory and configured to be executed by the AP and the frame insertion chip, and the programs include instructions for performing the steps of any method of the first aspect of the embodiments of the present application.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program for electronic data exchange, wherein the computer program causes a computer to perform some or all of the steps described in any method of the first aspect of the embodiments of the present application.
In a fifth aspect, an embodiment of the present application provides a computer program product including a non-transitory computer-readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps described in any method of the first aspect of the embodiments of the present application. The computer program product may be a software installation package.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present application or the prior art more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Evidently, the drawings described below are only some embodiments of the present application; those of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 is a schematic structural diagram of an electronic device provided by an embodiment of the present application;
FIG. 2 is a schematic diagram of the existing connection among an AP, a frame insertion chip, and a display screen;
FIG. 3 is a schematic diagram of the connection among an AP, a frame insertion chip, and a display screen provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of the software structure of an electronic device provided by an embodiment of the present application;
FIG. 5 is a schematic flowchart of a video data processing method provided by an embodiment of the present application;
FIG. 6 is a schematic diagram of determining a transition frame provided by an embodiment of the present application;
FIG. 7 is another schematic diagram of determining a transition frame provided by an embodiment of the present application;
FIG. 8 is another schematic diagram of determining a transition frame provided by an embodiment of the present application;
FIG. 9 is a schematic diagram of an example;
FIG. 10 is a schematic diagram of another example;
FIG. 11 is a schematic structural diagram of an existing frame insertion chip;
FIG. 12 is a schematic structural diagram of a frame insertion chip provided by an embodiment of the present application;
FIG. 13 is a schematic structural diagram of a video data processing apparatus provided by an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below with reference to the drawings.
The electronic device may be a portable electronic device that also contains other functions such as personal digital assistant and/or music player functions, such as a mobile phone, a tablet computer, or a wearable electronic device with wireless communication capability (such as a smart watch). Exemplary embodiments of portable electronic devices include, but are not limited to, portable electronic devices running the iOS, Android, Microsoft, or other operating systems. The portable electronic device may also be another portable electronic device, such as a laptop computer. It should also be understood that, in some other embodiments, the electronic device may not be a portable electronic device but a desktop computer.
Exemplarily, FIG. 1 shows a schematic structural diagram of an electronic device 100. The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, antenna 1, antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headset jack 170D, a sensor module 180, a compass 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, and the like.
It can be understood that the structure illustrated in the embodiments of the present application does not constitute a specific limitation on the electronic device 100. In other embodiments of the present application, the electronic device 100 may include more or fewer components than shown, combine some components, split some components, or arrange the components differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a graphics processing unit (GPU), an image signal processor (ISP), a video codec, a digital signal processor (DSP), a baseband processor, and the like. Different processing units may be independent components or may be integrated in one or more processors. In some embodiments, the electronic device 100 may also include one or more processors 110. The processor 110 may generate operation control signals according to instruction operation codes and timing signals to control instruction fetching and execution. In some other embodiments, a memory may also be provided in the processor 110 for storing instructions and data.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a SIM card interface, and/or a USB interface, etc.
It can be understood that the interface connection relationships among the modules illustrated in the embodiments of the present application are only schematic and do not constitute a structural limitation on the electronic device 100. In other embodiments of the present application, the electronic device 100 may also adopt interface connection manners different from those in the above embodiments, or a combination of multiple interface connection manners.
The charging management module 140 is configured to receive charging input from a charger. The power management module 141 is configured to connect the battery 142, the charging management module 140, and the processor 110.
The wireless communication function of the electronic device 100 may be implemented through antenna 1, antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The mobile communication module 150 may provide wireless communication solutions applied to the electronic device 100, including 2G/3G/4G/5G. The wireless communication module 160 may provide wireless communication solutions applied to the electronic device 100, including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) networks), Bluetooth (BT), and the like.
The electronic device 100 implements display functions through the GPU, the display screen 194, the application processor, and the like. The GPU is a microprocessor for image processing, connecting the display screen 194 and the application processor. The display screen 194 is used to display images, videos, and the like. The display screen 194 includes a display panel. In some embodiments, the electronic device 100 may include one or more display screens 194.
The existing connection among the AP, the frame insertion chip, and the display screen is shown in FIG. 2: the output interface of the AP is connected to the input interface of the frame insertion chip, and the output interface of the frame insertion chip is connected to the input interface of the display screen.
In the present application, the connection among the AP, the frame insertion chip, and the display screen is shown in FIG. 3: the output interface of the AP is connected to the input interface of the frame insertion chip, the first output interface of the frame insertion chip is connected to the input interface of the AP, and the second output interface of the frame insertion chip is connected to the input interface of the display screen.
It can be seen that, in the present application, a read-back path from the frame insertion chip to the AP is added, so that the frame insertion chip can feed back to the AP the information needed for the electronic device to subsequently read back the frame-inserted image data of a video. A video that has already undergone frame insertion therefore does not need to be processed again, which reduces power consumption.
The electronic device 100 may implement shooting functions through the ISP, the camera 193, the video codec, the GPU, the display screen 194, the application processor, and the like. The ISP is used to process data fed back by the camera 193. In some embodiments, the ISP may be provided in the camera 193. The camera 193 is used to capture still images or videos. In some embodiments, the electronic device 100 may include one or more cameras 193.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 100.
The internal memory 121 may be used to store one or more computer programs including instructions. By running the instructions stored in the internal memory 121, the processor 110 may cause the electronic device 101 to perform the frame insertion processing method provided in some embodiments of the present application, as well as various applications and data processing. In addition, the internal memory 121 may include a high-speed random access memory and may also include a non-volatile memory. In some embodiments, the processor 110 may run instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor 110 to cause the electronic device 101 to perform the frame insertion processing method provided in the embodiments of the present application, as well as other applications and data processing.
The electronic device 100 may implement audio functions, such as music playback and recording, through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headset jack 170D, the application processor, and the like.
The sensor module 180 may include at least one sensor, as shown in FIG. 1.
Exemplarily, FIG. 4 shows a block diagram of the software structure of the electronic device 100. The layered architecture divides the software into several layers, each with a clear role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the Android system is divided into four layers, which are, from top to bottom, the application layer, the application framework layer, the Android runtime and system libraries, and the kernel layer. The application layer may include a series of application packages.
The application layer may include at least one application.
The application framework layer provides an application programming interface (API) and a programming framework for the applications of the application layer. The application framework layer includes some predefined functions.
The window manager included in the application framework layer is used to manage window programs. The content provider included in the application framework layer is used to store and retrieve data and make the data accessible to applications. The view system included in the application framework layer includes visual controls, such as controls for displaying text and controls for displaying pictures; the view system may be used to build applications, and a display interface may be composed of one or more views. The telephony manager included in the application framework layer is used to provide the communication functions of the electronic device 100. The resource manager included in the application framework layer provides various resources for applications, such as localized strings, icons, pictures, layout files, and video files. The notification manager included in the application framework layer enables applications to display notification information in the status bar; it may be used to convey informational messages, which may disappear automatically after a short stay without user interaction. The notification manager may also present notifications in the top status bar of the system in the form of a chart or scroll bar text.
The Android runtime includes core libraries and a virtual machine. The Android runtime is responsible for scheduling and management of the Android system.
The core libraries contain two parts: one part consists of the functional functions that the Java language needs to call, and the other part is the core libraries of Android.
The application layer and the application framework layer run in the virtual machine.
The system libraries may include multiple functional modules, for example: a surface manager, media libraries, a 3D graphics processing library (e.g., OpenGL ES), and a 2D graphics engine (e.g., SGL).
The kernel layer is a layer between hardware and software. The kernel layer contains at least a display driver, a camera driver, an audio driver, and a sensor driver.
At present, real-time frame insertion serves only the current playback; the next time the video is played, the frame insertion operation must be performed again, which limits the reuse of the frame-inserted video.
Referring to FIG. 5, FIG. 5 is a schematic flowchart of a video data processing method provided by an embodiment of the present application, applied to the above electronic device. As shown in the figure, the method includes the following operations.
Step 501: the AP sends image data of a target video to the frame insertion chip.
Step 502: the frame insertion chip performs frame insertion processing on the received image data of the target video.
Step 503: the frame insertion chip sends the frame-inserted image data of the target video to the display screen, and sends target information to the AP, the target information being used to cause the frame-inserted image data of the target video to be stored in the memory.
Optionally, after step 503, the method further includes: after receiving the frame-inserted image data of the target video, the display screen performs display processing.
The target video may be a video downloaded by the electronic device from a video source provider, a video shot by the electronic device through a camera module, and so on.
Optionally, the frame insertion chip performs frame insertion processing based on a video frame insertion algorithm.
In one embodiment, the video frame insertion algorithm is: determining N first feature points based on a current frame image and a next frame image, where a first feature point is a pixel that changes in the next frame image compared with the current frame image, and N is a positive integer;
determining a motion vector of each first feature point to obtain N first motion vectors;
determining a first transition frame image based on the current frame image, the next frame image, and the N first motion vectors;
inserting the first transition frame image between the current frame image and the next frame image.
Optionally, determining the motion vector of each first feature point to obtain N first motion vectors includes:
determining the motion vector of each first feature point based on the pixel information of each first feature point in the current frame image and the pixel information of each first feature point in the next frame image, to obtain N first motion vectors.
The pixel information of a first feature point includes the position information and luminance information of the first feature point in the image.
The next frame image is the frame image immediately following the current frame image.
Optionally, determining the first transition frame image based on the current frame image, the next frame image, and the N first motion vectors includes: determining the pixel information of the first transition frame image based on the current frame image, the next frame image, and the N first motion vectors; and generating the first transition frame image based on the pixel information of the first transition frame image.
For example, suppose the current frame image and the next frame image of the operation interface are as shown in FIG. 6, with four first feature points (the 2×2 gray squares in the figure). If the positions of these four first feature points in the current frame image are (a11, b11), (a12, b12), (a13, b13), and (a14, b14), and their positions in the next frame image are (a21, b21), (a22, b22), (a23, b23), and (a24, b24), then the motion vectors of these four first feature points are (a21−a11, b21−b11), (a22−a12, b22−b12), (a23−a13, b23−b13), and (a24−a14, b24−b14). Finally, the first transition frame image can be generated from the current frame image, the next frame image, and the determined motion vectors, as shown in FIG. 6.
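The motion-vector arithmetic in the example above can be sketched as follows. This is a minimal illustration under stated assumptions, not the chip's actual implementation: the coordinates are hypothetical, and placing each feature point halfway along its motion vector (t = 0.5) is an assumed choice for a transition frame that sits temporally midway between the two frames.

```python
# Sketch of the first frame-insertion algorithm: a transition frame is
# built from feature points that changed between the current frame and
# the next frame. Coordinates are hypothetical, as in the FIG. 6 example.

def motion_vectors(pts_current, pts_next):
    """Per-feature motion vectors (a2 - a1, b2 - b1)."""
    return [(a2 - a1, b2 - b1)
            for (a1, b1), (a2, b2) in zip(pts_current, pts_next)]

def transition_positions(pts_current, vectors, t=0.5):
    """Place each feature point a fraction t along its motion vector
    (t = 0.5 gives a frame temporally midway between the two frames)."""
    return [(a + t * da, b + t * db)
            for (a, b), (da, db) in zip(pts_current, vectors)]

# Four first feature points, analogous to the 2x2 gray squares in FIG. 6.
current_pts = [(0, 0), (0, 2), (2, 0), (2, 2)]
next_pts    = [(4, 4), (4, 6), (6, 4), (6, 6)]

vecs = motion_vectors(current_pts, next_pts)    # each point moved by (4, 4)
mid  = transition_positions(current_pts, vecs)  # halfway positions
print(vecs[0], mid[0])  # (4, 4) (2.0, 2.0)
```

In a real pipeline the luminance information of each feature point would also be blended; here only positions are shown.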
In another embodiment, the video frame insertion algorithm is: determining M second feature points based on a current frame image and the K frame images preceding the current frame image, where a second feature point is a pixel that changes in the current frame image compared with the preceding K frame images, and both M and K are positive integers;
determining a motion vector of each second feature point to obtain M second motion vectors;
determining a second transition frame image based on the current frame image and the M second motion vectors;
inserting the second transition frame image between the current frame image and the next frame image.
The previous frame image is the frame image immediately preceding the current frame image.
Optionally, if K = 1, determining the motion vector of each second feature point to obtain M second motion vectors includes: determining the motion vector of each second feature point based on the pixel information of each second feature point in the current frame image and the pixel information of each second feature point in the previous frame image, to obtain M second motion vectors.
The pixel information of a second feature point includes the position information and luminance information of the second feature point in the image.
Optionally, if K ≥ 2, determining the motion vector of each second feature point to obtain M second motion vectors includes: determining the motion vector of each second feature point based on the pixel information of each second feature point in the current frame image and its pixel information in a target image, to obtain M second motion vectors, where the preceding K frame images include the target image, the target image is separated from the current frame image by W frame images, and W = [(K+2)/2] − 2.
For example, suppose K = 1, and the current frame image and the previous frame image of the operation interface are as shown in FIG. 7, with four second feature points (the 2×2 gray squares in the figure). If the positions of these four second feature points in the previous frame image are (c11, d11), (c12, d12), (c13, d13), and (c14, d14), and their positions in the current frame image are (c21, d21), (c22, d22), (c23, d23), and (c24, d24), then the motion vectors of these four second feature points are (c21−c11, d21−d11), (c22−c12, d22−d12), (c23−c13, d23−d13), and (c24−c14, d24−d14). Finally, the second transition frame image can be generated from the current frame image, the previous frame image, and the determined motion vectors, as shown in FIG. 7.
As another example, suppose K = 4; the current frame image and the target image (which precedes the current frame image and is separated from it by one frame) are as shown in FIG. 8, with four second feature points (the 2×2 gray squares in the figure). If the positions of these four second feature points in the target image are (e11, f11), (e12, f12), (e13, f13), and (e14, f14), and their positions in the current frame image are (e21, f21), (e22, f22), (e23, f23), and (e24, f24), then the motion vectors of these four second feature points are [(e21−e11)/2, (f21−f11)/2], [(e22−e12)/2, (f22−f12)/2], [(e23−e13)/2, (f23−f13)/2], and [(e24−e14)/2, (f24−f14)/2]. Finally, the second transition frame image can be generated from the current frame image, the target image, and the determined motion vectors, as shown in FIG. 8.
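The spacing formula W = [(K+2)/2] − 2 and the halved motion vectors in the K = 4 example can be checked with a short sketch. It assumes that [.] denotes rounding down (floor division), which is consistent with the K = 4 case yielding W = 1:

```python
# Sketch of the second algorithm's target-image selection and the
# resulting motion-vector scaling. Assumes [.] in W = [(K+2)/2] - 2
# means floor division, consistent with the K = 4 example (W = 1).

def target_spacing(k):
    """Frames between the target image and the current frame."""
    return (k + 2) // 2 - 2

def extrapolated_vectors(pts_target, pts_current, w):
    """Motion per inter-frame step: the target image is w + 1 steps
    before the current frame, so divide the displacement by w + 1.
    For K = 4 (w = 1) this is the halving in the FIG. 8 example."""
    steps = w + 1
    return [((e2 - e1) / steps, (f2 - f1) / steps)
            for (e1, f1), (e2, f2) in zip(pts_target, pts_current)]

print(target_spacing(4))  # 1, as in the FIG. 8 example
print(extrapolated_vectors([(0, 0)], [(4, 2)], target_spacing(4)))
# [(2.0, 1.0)] -- half the displacement, matching the /2 in the example
```

Because this variant uses only frames at or before the current frame, it can extrapolate a transition frame before the next frame has even arrived, which trades a little accuracy for lower latency.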
It can be seen that, in the embodiments of the present application, the AP first sends the image data of the target video to the frame insertion chip, and the frame insertion chip performs frame insertion processing on the received image data. The frame insertion chip then sends the frame-inserted image data of the target video to the display screen so that the display screen performs display processing, and at the same time sends target information to the AP, the target information being used to cause the frame-inserted image data of the target video to be stored in the memory so that the electronic device can subsequently read it back. The frame-inserted video can thus be reused, ultimately reducing power consumption.
In one implementation of the present application, the target information is identification information indicating that the target video has undergone frame insertion processing. After the frame insertion chip sends the target information to the AP, the method further includes:
the AP obtains the frame-inserted image data of the target video from a service apparatus, and stores the frame-inserted image data of the target video in the memory.
Optionally, before the AP obtains the frame-inserted image data of the target video from the service apparatus, the method further includes:
the AP sends the target video to the service apparatus during a target period so that the service apparatus performs frame insertion processing on the target video, the target period being a non-working period or a period during which the electronic device is infrequently used.
In one embodiment, the service apparatus is integrated in the electronic device and includes an algorithm module and a storage module, the algorithm module storing the above video frame insertion algorithm.
The service apparatus is integrated in the processor, or is a module independent of the processor.
Specifically, as shown in FIG. 9, the AP sends the target video to the service apparatus; after receiving the target video, the service apparatus performs frame insertion processing on it based on the above video frame insertion algorithm to obtain the frame-inserted image data of the target video; the service apparatus stores the frame-inserted image data in its storage module; the AP obtains the frame-inserted image data from the storage module of the service apparatus and then stores it in the memory.
It can be seen that, in this embodiment of the present application, after the target video undergoes frame insertion, the frame insertion chip feeds back target information to the AP; upon receiving the target information, the AP sends the target video to the service apparatus during an idle period so that the service apparatus performs frame insertion during the idle period; finally, the AP obtains the frame-inserted image data from the service apparatus and stores it in the memory. This makes the frame-inserted video reusable and ultimately reduces power consumption. In addition, performing frame insertion in the service apparatus during an idle period avoids adding extra computational load to the electronic device during non-idle periods, thereby avoiding stuttering.
In another embodiment, the service apparatus is a cloud server including a cloud algorithm and cloud storage, the cloud algorithm including the above video frame insertion algorithm, and the cloud storage being used to store the frame-inserted video data of the target video.
Specifically, as shown in FIG. 10, the AP sends the target video to the cloud server; the cloud server performs frame insertion processing on the target video based on the above video frame insertion algorithm to obtain the frame-inserted image data; the cloud server stores the frame-inserted image data in its cloud storage; the AP obtains the frame-inserted image data from the cloud storage of the cloud server and then stores it in the memory.
It can be seen that, in this embodiment of the present application, after the target video undergoes frame insertion, the frame insertion chip feeds back identification information to the AP; upon receiving the identification information, the AP sends the target video to the cloud server during an idle period so that the cloud server performs frame insertion during the idle period; finally, the AP obtains the frame-inserted image data from the cloud server and stores it in the memory. This makes the frame-inserted video reusable and ultimately reduces power consumption. In addition, performing frame insertion on the cloud server during an idle period avoids adding extra computational load to the cloud server during non-idle periods.
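The idle-period offload described above amounts to a scheduling check on the AP side. The following is a minimal sketch under stated assumptions: the idle-window bounds and the service interface (`interpolate`) are illustrative inventions, since the patent specifies neither.

```python
# Sketch of the AP-side idle-period offload: once frame insertion has
# been flagged but the result is not yet cached, the target video is
# sent to the service apparatus only during an idle window, and the
# frame-inserted data is stored locally afterwards.
# The window bounds and the service API are hypothetical.

from datetime import time

IDLE_START, IDLE_END = time(1, 0), time(6, 0)  # assumed non-working period

def in_idle_window(now):
    return IDLE_START <= now <= IDLE_END

def offload_if_idle(video, now, service, memory):
    """service is assumed to expose interpolate(video) -> inserted data."""
    if not in_idle_window(now):
        return False                  # defer to the next idle period
    memory[video] = service.interpolate(video)
    return True

class FakeService:                    # stand-in for the cloud server / module
    def interpolate(self, video):
        return f"{video}+interpolated"

mem = {}
offload_if_idle("clip.mp4", time(3, 0), FakeService(), mem)
print(mem)  # {'clip.mp4': 'clip.mp4+interpolated'}
```

Deferring the work this way keeps the interactive-use path free of extra computation, which is the stated motivation for the idle-period design.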
In one implementation of the present application, the electronic device further includes a memory, and the target information is the frame-inserted image data of the target video. After the frame insertion chip sends the target information to the AP, the method further includes: the AP stores the frame-inserted image data of the target video in the memory.
It can be seen that, in this embodiment of the present application, while sending the frame-inserted image data of the target video to the display screen, the frame insertion chip also sends the frame-inserted image data to the AP, so that the AP stores it in the memory. This makes the frame-inserted video reusable and ultimately reduces power consumption.
In one implementation of the present application, after the AP stores the frame-inserted image data of the target video in the memory, the method further includes:
when detecting a playback instruction for the target video, the AP obtains the frame-inserted image data of the target video from the memory and sends it to the frame insertion chip;
the frame insertion chip sends the received frame-inserted image data of the target video to the display screen;
after receiving the frame-inserted image data of the target video, the display screen performs display processing.
It can be seen that, in this embodiment of the present application, when the electronic device again detects a playback instruction for the target video, the frame insertion chip no longer needs to perform frame insertion processing and simply transmits the frame-inserted image data of the target video to the display screen for display, which reduces the computation of the frame insertion chip and further lowers the power consumption of the electronic device.
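The replay path above is, in effect, a cache lookup on the AP: if the frame-inserted data is already in memory, it is passed through the chip untouched; otherwise the normal interpolation path runs once and its output is kept. A minimal sketch, where `interpolate` is an illustrative stand-in for the chip's frame-insertion processing rather than its real API:

```python
# Sketch of the playback decision: reuse cached frame-inserted data
# when available so the frame insertion chip runs in pass-through mode.
# Frames are modeled as plain numbers for brevity.

def interpolate(frames):
    """Hypothetical stand-in for the chip's frame insertion: insert a
    midpoint transition frame between each pair of adjacent frames."""
    out = []
    for a, b in zip(frames, frames[1:]):
        out += [a, (a + b) / 2]
    return out + frames[-1:]

def play(video_id, frames, cache):
    if video_id in cache:             # replay: chip is pass-through
        return cache[video_id]
    result = interpolate(frames)      # first play: chip interpolates
    cache[video_id] = result          # AP stores the result in memory
    return result

cache = {}
first  = play("v1", [0.0, 2.0, 4.0], cache)  # [0.0, 1.0, 2.0, 3.0, 4.0]
second = play("v1", [0.0, 2.0, 4.0], cache)  # served from the cache
print(first, second is first)
```

The saving is exactly the cost of `interpolate` on every playback after the first, which mirrors the power-consumption argument made in the text.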
The structure of an existing frame insertion chip is shown in FIG. 11: the chip includes an input control unit, a compression processing unit, and an output control unit; the input control unit is connected to the compression processing unit, and the compression processing unit is connected to the output control unit.
In one implementation of the present application, as shown in FIG. 12, the frame insertion chip includes an input control unit, a compression processing unit, an output control unit, and a storage control unit;
the input interface of the frame insertion chip is connected to the input control unit, the input control unit is connected to the compression processing unit, the compression processing unit is connected to both the output control unit and the storage control unit, the output control unit is connected to the first output interface of the frame insertion chip, the second output interface of the frame insertion chip, and the storage control unit, and the storage control unit is connected to the memory.
Further, the storage control unit is also connected to the memory.
Further, the electronic device further includes a camera module, and the target video is a video shot in real time by the camera module.
It can be seen that, in this embodiment of the present application, a storage control unit is added to the frame insertion chip, expanding its storage from only one or two frames to more content. In addition, a channel from the compression processing unit to the storage control unit and a channel from the output control unit to the storage control unit are added, so that the storage control unit can communicate not only with the compression processing unit but also cooperate with the output control unit, which enhances the functionality of the frame insertion chip.
It can be understood that, in order to implement the above functions, the electronic device includes corresponding hardware and/or software modules for performing each function. In combination with the algorithm steps of the examples described in the embodiments disclosed herein, the present application can be implemented in the form of hardware or a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and design constraints of the technical solution. Those skilled in the art may use different methods to implement the described functions for each particular application in combination with the embodiments, but such implementations should not be considered beyond the scope of the present application.
In this embodiment, the electronic device may be divided into functional modules according to the above method examples. For example, each functional module may be divided corresponding to each function, or two or more functions may be integrated in one processing module. The integrated module may be implemented in the form of hardware. It should be noted that the division of modules in this embodiment is schematic and is only a logical function division; there may be other division manners in actual implementation.
In the case where each functional module is divided corresponding to each function, FIG. 13 shows a schematic diagram of a video data processing apparatus. As shown in FIG. 13, the video data processing apparatus is applied to an electronic device including a display screen and a memory, and may include an AP 1301 and a frame insertion chip 1302.
The AP 1301 may be used to support the electronic device in performing the above step 501, etc., and/or other processes of the techniques described herein.
The frame insertion chip 1302 may be used to support the electronic device in performing the above steps 502 and 503, etc., and/or other processes of the techniques described herein.
It should be noted that all relevant content of the steps involved in the above method embodiments may be incorporated by reference into the functional descriptions of the corresponding functional modules, and will not be repeated here.
The electronic device provided in this embodiment is used to perform the above video data processing method and can therefore achieve the same effects as the above implementation.
In the case where an integrated unit is adopted, the electronic device may include a processing module, a storage module, and a communication module. The processing module may be used to control and manage the actions of the electronic device, for example, to support the electronic device in performing the steps performed by the AP 1301 and the frame insertion chip 1302 above. The storage module may be used to support the electronic device in storing program code, data, and the like. The communication module may be used to support communication between the electronic device and other devices.
The processing module may be a processor or a controller. It may implement or execute the various exemplary logical blocks, modules, and circuits described in connection with the disclosure of the present application. The processor may also be a combination that implements computing functions, for example a combination of one or more microprocessors, or a combination of a digital signal processor (DSP) and a microprocessor. The storage module may be a memory. The communication module may specifically be a device that interacts with other electronic devices, such as a radio frequency circuit, a Bluetooth chip, or a Wi-Fi chip.
In one embodiment, when the processing module is a processor and the storage module is a memory, the electronic device involved in this embodiment may be a device having the structure shown in FIG. 1.
This embodiment also provides a computer storage medium storing computer instructions which, when run on an electronic device, cause the electronic device to perform the above relevant method steps to implement the video data processing method in the above embodiments.
This embodiment also provides a computer program product which, when run on a computer, causes the computer to perform the above relevant steps to implement the video data processing method in the above embodiments.
In addition, embodiments of the present application also provide an apparatus, which may specifically be a chip, a component, or a module; the apparatus may include a processor and a memory connected to each other, where the memory is used to store computer-executable instructions. When the apparatus runs, the processor may execute the computer-executable instructions stored in the memory, so that the chip performs the video data processing method in each of the above method embodiments.
The electronic device, computer storage medium, computer program product, or chip provided in this embodiment is used to perform the corresponding method provided above; therefore, for the beneficial effects it can achieve, reference may be made to the beneficial effects of the corresponding method provided above, which will not be repeated here.
Through the description of the above embodiments, those skilled in the art can understand that, for convenience and brevity of description, only the division of the above functional modules is used as an example. In practical applications, the above functions may be assigned to different functional modules as needed; that is, the internal structure of the apparatus may be divided into different functional modules to complete all or part of the functions described above.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are only illustrative; the division of modules or units is only a logical function division, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another apparatus, or some features may be omitted or not performed. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, apparatuses, or units, and may be in electrical, mechanical, or other forms.
Units described as separate components may or may not be physically separated, and components shown as units may be one physical unit or multiple physical units; that is, they may be located in one place or distributed in multiple different places. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a readable storage medium. Based on this understanding, the technical solution of the embodiments of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a device (which may be a single-chip microcomputer, a chip, etc.) or a processor to perform all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above content is only specific implementations of the present application, but the protection scope of the present application is not limited thereto. Any person skilled in the art can easily think of changes or substitutions within the technical scope disclosed in the present application, which shall all be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (20)

  1. A video data processing method, applied to an electronic device, the electronic device including an application processor (AP), a frame insertion chip, a display screen, and a memory, the method comprising:
    the AP sending image data of a target video to the frame insertion chip;
    the frame insertion chip performing frame insertion processing on the received image data of the target video;
    the frame insertion chip sending the frame-inserted image data of the target video to the display screen, and sending target information to the AP, the target information being used to cause the frame-inserted image data of the target video to be stored in the memory.
  2. The method according to claim 1, wherein the target information is identification information indicating that the target video has undergone frame insertion processing, and after the frame insertion chip sends the target information to the AP, the method further comprises:
    the AP obtaining the frame-inserted image data of the target video from a service apparatus, and storing the frame-inserted image data of the target video in the memory.
  3. The method according to claim 2, wherein before the AP obtains the frame-inserted image data of the target video from the service apparatus, the method further comprises:
    the AP sending the target video to the service apparatus during a target period so that the service apparatus performs frame insertion processing on the target video, the target period being a non-working period or a period during which the electronic device is infrequently used.
  4. The method according to claim 2 or 3, wherein the service apparatus is integrated in the electronic device, or the service apparatus is a cloud server.
  5. The method according to claim 1, wherein the electronic device further includes a memory, the target information is the frame-inserted image data of the target video, and after the frame insertion chip sends the target information to the AP, the method further comprises: the AP storing the frame-inserted image data of the target video in the memory.
  6. The method according to any one of claims 2 to 5, wherein after the AP stores the frame-inserted image data of the target video in the memory, the method further comprises:
    when a playback instruction for the target video is detected, the AP obtaining the frame-inserted image data of the target video from the memory, and sending the frame-inserted image data of the target video to the frame insertion chip;
    the frame insertion chip sending the received frame-inserted image data of the target video to the display screen;
    after receiving the frame-inserted image data of the target video, the display screen performing display processing.
  7. The method according to claim 5, wherein the frame insertion chip includes an input control unit, a compression processing unit, an output control unit, and a storage control unit;
    wherein the input interface of the frame insertion chip is connected to the input control unit, the input control unit is connected to the compression processing unit, the compression processing unit is connected to both the output control unit and the storage control unit, the output control unit is connected to the first output interface of the frame insertion chip, the second output interface of the frame insertion chip, and the storage control unit, and the storage control unit is connected to the memory.
  8. The method according to claim 7, wherein the electronic device further includes a camera module, and the target video is a video shot in real time by the camera module.
  9. The method according to any one of claims 1 to 8, wherein the output interface of the AP is connected to the input interface of the frame insertion chip, the first output interface of the frame insertion chip is connected to the input interface of the AP, and the second output interface of the frame insertion chip is connected to the input interface of the display screen.
  10. A video data processing apparatus, applied to an electronic device including a display screen and a memory, the apparatus comprising an application processor (AP) and a frame insertion chip, wherein:
    the AP is configured to send image data of a target video to the frame insertion chip;
    the frame insertion chip is configured to perform frame insertion processing on the received image data of the target video;
    the frame insertion chip is further configured to send the frame-inserted image data of the target video to the display screen, and to send first information to the AP, the first information being used to cause the frame-inserted image data of the target video to be stored in the memory.
  11. The apparatus according to claim 10, wherein the target information is identification information indicating that the target video has undergone frame insertion processing, and after the frame insertion chip sends the target information to the AP, the AP is further configured to:
    obtain the frame-inserted image data of the target video from a service apparatus, and store the frame-inserted image data of the target video in the memory.
  12. The apparatus according to claim 11, wherein before the AP obtains the frame-inserted image data of the target video from the service apparatus, the AP is further configured to:
    send the target video to the service apparatus during a target period so that the service apparatus performs frame insertion processing on the target video, the target period being a non-working period or a period during which the electronic device is infrequently used.
  13. The apparatus according to claim 11 or 12, wherein the service apparatus is integrated in the electronic device, or the service apparatus is a cloud server.
  14. The apparatus according to claim 10, wherein the electronic device further includes a memory, the target information is the frame-inserted image data of the target video, and after the frame insertion chip sends the target information to the AP, the AP is further configured to: store the frame-inserted image data of the target video in the memory.
  15. The apparatus according to any one of claims 11 to 14, wherein after the AP stores the frame-inserted image data of the target video in the memory:
    the AP is further configured to: when a playback instruction for the target video is detected, obtain the frame-inserted image data of the target video from the memory, and send the frame-inserted image data of the target video to the frame insertion chip;
    the frame insertion chip is further configured to: send the received frame-inserted image data of the target video to the display screen;
    the display screen is configured to perform display processing after receiving the frame-inserted image data of the target video.
  16. The apparatus according to claim 14, wherein the frame insertion chip includes an input control unit, a compression processing unit, an output control unit, and a storage control unit;
    wherein the input interface of the frame insertion chip is connected to the input control unit, the input control unit is connected to the compression processing unit, the compression processing unit is connected to both the output control unit and the storage control unit, the output control unit is connected to the first output interface of the frame insertion chip, the second output interface of the frame insertion chip, and the storage control unit, and the storage control unit is connected to the memory.
  17. The apparatus according to claim 16, wherein the electronic device further includes a camera module, and the target video is a video shot in real time by the camera module.
  18. The apparatus according to any one of claims 10 to 17, wherein the output interface of the AP is connected to the input interface of the frame insertion chip, the first output interface of the frame insertion chip is connected to the input interface of the AP, and the second output interface of the frame insertion chip is connected to the input interface of the display screen.
  19. An electronic device comprising a processor, a memory, a communication interface, a display screen, and one or more programs, the processor including an application processor (AP) and a frame insertion chip, the one or more programs being stored in the memory and configured to be executed by the AP and the frame insertion chip, the programs including instructions for performing the steps of the method according to any one of claims 1 to 9.
  20. A computer-readable storage medium storing a computer program for electronic data exchange, wherein the computer program causes a computer to perform the method according to any one of claims 1 to 9.
PCT/CN2021/102574 2020-08-21 2021-06-26 视频数据处理方法及装置 WO2022037251A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010851392.8A CN112004086B (zh) 2020-08-21 2020-08-21 视频数据处理方法及装置
CN202010851392.8 2020-08-21

Publications (1)

Publication Number Publication Date
WO2022037251A1 true WO2022037251A1 (zh) 2022-02-24

Family

ID=73473102

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/102574 WO2022037251A1 (zh) 2020-08-21 2021-06-26 视频数据处理方法及装置

Country Status (2)

Country Link
CN (1) CN112004086B (zh)
WO (1) WO2022037251A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114500853A (zh) * 2022-02-25 2022-05-13 维沃移动通信有限公司 电子设备及图像显示方法
CN116886996A (zh) * 2023-09-06 2023-10-13 浙江富控创联技术有限公司 一种数字乡村多媒体显示屏广播系统

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112004086B (zh) * 2020-08-21 2022-11-11 Oppo广东移动通信有限公司 视频数据处理方法及装置
CN113835657A (zh) * 2021-09-08 2021-12-24 维沃移动通信有限公司 显示方法及电子设备
CN113852776B (zh) * 2021-09-08 2024-06-04 维沃移动通信有限公司 插帧方法及电子设备
CN114285959A (zh) * 2021-12-28 2022-04-05 维沃移动通信有限公司 图像处理电路、方法、装置、电子设备及芯片
CN114338953A (zh) * 2021-12-28 2022-04-12 维沃移动通信有限公司 视频处理电路、视频处理方法和电子设备
CN114285956A (zh) * 2021-12-28 2022-04-05 维沃移动通信有限公司 视频分享电路、方法、装置及电子设备
CN114285958A (zh) * 2021-12-28 2022-04-05 维沃移动通信有限公司 图像处理电路、图像处理方法和电子设备
CN114338954A (zh) * 2021-12-28 2022-04-12 维沃移动通信有限公司 视频生成电路、方法和电子设备
CN114285978A (zh) * 2021-12-28 2022-04-05 维沃移动通信有限公司 视频处理方法、视频处理装置和电子设备
CN114443894A (zh) * 2022-01-05 2022-05-06 荣耀终端有限公司 数据处理方法、装置、电子设备和存储介质
CN115514902A (zh) * 2022-09-09 2022-12-23 维沃移动通信有限公司 图像处理电路和电子设备

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080170161A1 (en) * 2006-12-28 2008-07-17 Hitachi, Ltd. Image processing apparatus and image display apparatus provided with the same
US20130127887A1 (en) * 2006-09-19 2013-05-23 Industrial Technology Research Institute Method for storing interpolation data
CN111050149A (zh) * 2019-12-24 2020-04-21 苏州乐梦光电科技有限公司 用于投影系统的视频处理方法、装置、设备及存储介质
CN111083417A (zh) * 2019-12-10 2020-04-28 Oppo广东移动通信有限公司 图像处理方法及相关产品
CN111147787A (zh) * 2019-12-27 2020-05-12 Oppo广东移动通信有限公司 插帧处理方法及相关设备
CN112004086A (zh) * 2020-08-21 2020-11-27 Oppo广东移动通信有限公司 视频数据处理方法及装置

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8244713B2 (en) * 2007-07-12 2012-08-14 International Business Machines Corporation Content management system that retrieves data from an external data source and creates one or more objects in the repository
CN101977218B (zh) * 2010-10-20 2013-11-27 深圳市融创天下科技股份有限公司 一种互联网播放文件转码方法和系统
CN104469241B (zh) * 2014-11-28 2018-01-16 中国航空无线电电子研究所 一种实现视频帧率变换的装置
US10904535B2 (en) * 2017-04-01 2021-01-26 Intel Corporation Video motion processing including static scene determination, occlusion detection, frame rate conversion, and adjusting compression ratio
CN108600783B (zh) * 2018-04-23 2021-03-30 深圳齐心好视通云计算有限公司 一种帧率调节方法、装置及终端设备
CN110874128B (zh) * 2018-08-31 2021-03-30 上海瑾盛通信科技有限公司 可视化数据处理方法和电子设备
CN111225150B (zh) * 2020-01-20 2021-08-10 Oppo广东移动通信有限公司 插帧处理方法及相关产品

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130127887A1 (en) * 2006-09-19 2013-05-23 Industrial Technology Research Institute Method for storing interpolation data
US20080170161A1 (en) * 2006-12-28 2008-07-17 Hitachi, Ltd. Image processing apparatus and image display apparatus provided with the same
CN111083417A (zh) * 2019-12-10 2020-04-28 Oppo广东移动通信有限公司 图像处理方法及相关产品
CN111050149A (zh) * 2019-12-24 2020-04-21 苏州乐梦光电科技有限公司 用于投影系统的视频处理方法、装置、设备及存储介质
CN111147787A (zh) * 2019-12-27 2020-05-12 Oppo广东移动通信有限公司 插帧处理方法及相关设备
CN112004086A (zh) * 2020-08-21 2020-11-27 Oppo广东移动通信有限公司 视频数据处理方法及装置

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114500853A (zh) * 2022-02-25 2022-05-13 维沃移动通信有限公司 电子设备及图像显示方法
WO2023160669A1 (zh) * 2022-02-25 2023-08-31 维沃移动通信有限公司 电子设备及图像显示方法
CN116886996A (zh) * 2023-09-06 2023-10-13 浙江富控创联技术有限公司 一种数字乡村多媒体显示屏广播系统
CN116886996B (zh) * 2023-09-06 2023-12-01 浙江富控创联技术有限公司 一种数字乡村多媒体显示屏广播系统

Also Published As

Publication number Publication date
CN112004086A (zh) 2020-11-27
CN112004086B (zh) 2022-11-11

Similar Documents

Publication Publication Date Title
WO2022037251A1 (zh) 视频数据处理方法及装置
KR102221023B1 (ko) 이미지를 처리하는 전자장치 및 방법
US20240020074A1 (en) Multi-Window Projection Method and Electronic Device
WO2021147657A1 (zh) 插帧处理方法及相关产品
CN108762881B (zh) 界面绘制方法、装置、终端及存储介质
EP4060475A1 (en) Multi-screen cooperation method and system, and electronic device
CN112394895A (zh) 画面跨设备显示方法与装置、电子设备
KR20150082940A (ko) 화면의 회전을 컨트롤할 수 있는 전자 장치 및 방법
CN111813490A (zh) 插帧处理方法及装置
WO2022161227A1 (zh) 图像处理方法、装置、图像处理芯片和电子设备
WO2017202175A1 (zh) 一种视频压缩方法、装置及电子设备
WO2022083465A1 (zh) 电子设备的投屏方法及其介质和电子设备
CN112398855A (zh) 应用内容跨设备流转方法与装置、电子设备
CN112328941A (zh) 基于浏览器的应用投屏方法及相关装置
US20240061569A1 (en) Method for performing frame interpolation in interface display process and terminal device
WO2023125657A1 (zh) 图像处理方法、装置和电子设备
WO2023231655A9 (zh) 弹幕识别方法和相关装置
US20230315148A1 (en) Interface Display Method and Electronic Device
WO2023193598A1 (zh) 一种图像处理方法、装置、设备及存储介质
US9135036B2 (en) Method and system for reducing communication during video processing utilizing merge buffering
CN112260845B (zh) 进行数据传输加速的方法和装置
WO2022111585A1 (zh) 一种图像画面自适应裁剪的方法及电子设备
CN117406654B (zh) 音效处理方法和电子设备
WO2024109443A1 (zh) 一种设备连接方法、设备及系统
US11756151B1 (en) End-cloud collaborative media data processing method and apparatus, device, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21857347

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21857347

Country of ref document: EP

Kind code of ref document: A1