CN109120938B - Camera intermediate layer image processing method and system on chip

Camera intermediate layer image processing method and system on chip

Info

Publication number
CN109120938B
CN109120938B
Authority
CN
China
Prior art keywords
data
decoding
unit
thread
hardware
Prior art date
Legal status
Active
Application number
CN201710493813.2A
Other languages
Chinese (zh)
Other versions
CN109120938A (en)
Inventor
赵丙山 (Zhao Bingshan)
Current Assignee
Sanechips Technology Co Ltd
Original Assignee
Sanechips Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Sanechips Technology Co Ltd filed Critical Sanechips Technology Co Ltd
Priority to CN201710493813.2A priority Critical patent/CN109120938B/en
Publication of CN109120938A publication Critical patent/CN109120938A/en
Application granted granted Critical
Publication of CN109120938B publication Critical patent/CN109120938B/en

Classifications

    • H04N19/42: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals, characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • G06F9/546: Interprogram communication; message passing systems or structures, e.g. queues
    • H04N19/44: Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
    • H04N19/80: Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
    • G06F2209/547: Indexing scheme relating to G06F9/54; messaging middleware

Abstract

A Camera intermediate layer image processing method and a system on chip are provided. Fault-tolerant processing repairs abnormal or lost video data before decoding, which lowers the probability of errors while decoding the video data, keeps decoding on the hardware decoder as far as possible, reduces fallbacks to software decoding, and thus improves the decoding performance of the system on chip. A filtering step is also added after decoding to meet the requirements of subsequent modules and improve the display performance of the system. In addition, the invention binds each core of the system on chip's multi-core central processing unit to a different thread, so that software and hardware run in parallel, further improving the display performance of the system on chip. Finally, from the standpoint of display quality, the invention selects a decoding data format well suited to the system on chip, and can therefore provide higher display quality while preserving display performance.

Description

Camera intermediate layer image processing method and system on chip
Technical Field
The invention relates to Camera intermediate layer technology, and in particular to a Camera intermediate layer image processing method and a system on chip.
Background
With the spread of mobile networks and embedded systems, the System on Chip (SoC) in embedded devices has become increasingly powerful, and the applications of embedded devices correspondingly broader. A user can, for example, install multimedia applications (APPs) such as video-call or online-conference APPs on an embedded system and communicate with the other party in real time. Online conferencing in particular places high demands on the image display effect, so the real-time performance and fluency of image display are key criteria when users evaluate and purchase embedded devices.
The online-conference scenario generally arises between two set-top box terminals, or between a set-top box terminal and a mobile phone or other network device that supports the function. Both communicating devices must be connected to the application platform server so that they can reach each other. The user installs the relevant application on the device, registers an account, logs in, and after a successful login dials the account of the remote party to start a video call.
During operation, a multimedia APP relies on real-time video playback; in other words, video encoding and decoding are an essential component of the embedded system. Unlike the operating system of a Personal Computer (PC), embedded devices are constrained by their own hardware, so their encoding and decoding speed and quality are currently unsatisfactory. Video playback involves image decoding, and the computing power of a conventional system-on-chip Central Processing Unit (CPU) is limited, so handling image encoding and decoding on the CPU imposes a heavy load.
In the prior art, the system on chip of an embedded device typically falls into one of two situations when encoding and decoding video images: either it has no dedicated image encoder and image decoder for graphics processing, or it has the corresponding hardware but the hardware's capability is limited and abnormal sources cannot be processed at all. In both cases the encoding and decoding work is directly or indirectly handed over to the CPU, which increases the CPU load and reduces the overall performance of the system. To the user this mainly appears as a stuttering display during video conference calls, which ultimately degrades the user experience.
Disclosure of Invention
To overcome these defects in the prior art, the invention provides a Camera intermediate layer image processing method and a system on chip.
Firstly, in order to achieve the above object, a Camera intermediate layer image processing method is provided, which includes the following steps:
in the first step, receiving a call request;
in the second step, responding to the call request and initializing software resources and hardware resources;
in the third step, judging, according to the initialization result, whether hardware decoding is supported, and judging, according to the video data in the call request, whether fault-tolerant processing is required; if so, jumping to the fourth step; otherwise, jumping to the fifth step;
in the fourth step, parsing the video data and judging whether data loss or abnormality exists; if not, jumping directly to the fifth step; otherwise, repairing the data first and then jumping to the fifth step;
in the fifth step, if the third step judged that hardware decoding is supported, calling the decoder interface so that the decoder performs hardware decoding on the video data; otherwise, calling a software library to perform software decoding on the video data; obtaining decoded data;
in the sixth step, filtering, namely analyzing the decoded data obtained in the fifth step to detect whether a data abnormality exists; if so, discarding the abnormal decoded data and jumping to the seventh step; otherwise, jumping directly to the seventh step;
in the seventh step, outputting the decoded data to a frame buffer device, or encoding the decoded data in the OMX framework layer and then outputting it to the network layer.
Further, in the fourth step of the above method, the fault-tolerant processing parses the video data according to the encoding standard protocol of the video data and judges whether the field information of the video data conforms to that protocol; field information of video data that does not conform to the encoding standard protocol is repaired; the repair consists of supplementing missing information or skipping superfluous information.
Meanwhile, in the sixth step of the above method, the filtering includes detecting data abnormalities by a block detection method;
the block detection method includes detecting one or more of the Y, U and V components in the decoded data.
Secondly, in order to achieve the above object, a Camera intermediate layer architecture for image processing is further provided, which includes a receiving unit, an initialization unit, a judging unit, a decoding unit, a filtering unit and an output unit connected in sequence, and further includes a fault-tolerant unit:
the receiving unit is used for receiving a call request of an application layer, and acquiring video data and a processing task carried in the call request;
the initialization unit is used for responding to the processing task, initializing the Camera intermediate layer, obtaining configuration information, and configuring software resources and hardware resources of the Camera intermediate layer according to the configuration information;
the judging unit is used for judging, according to the configuration information, whether hardware decoding is supported, and for judging, according to the video data carried in the call request, whether fault-tolerant processing is required; if so, the fault-tolerant unit is called, and otherwise the decoding unit is called;
the fault-tolerant unit is connected between the judging unit and the decoding unit and is used for repairing the lost or abnormal video data;
the decoding unit is used for decoding the video data and outputting decoded data;
the filtering unit is used for detecting whether the decoded data has abnormity or not and discarding the decoded data with abnormity;
the output unit is used for buffering the decoded data output by the filtering unit and displaying it locally; the output unit is further configured to perform OMX framework layer encoding on the decoded data to generate H264 data, perform RTP packing on the encoded H264 data, and output the packed data to the network layer.
Further, in the Camera intermediate layer, the decoding unit includes a hardware decoder and a software decoding unit:
the decoding unit calls the hardware decoder to perform hardware decoding when the judging unit judges that hardware decoding is supported, and calls the software decoding unit to perform software decoding when the judging unit judges that hardware decoding is not supported or when the response information fed back by the hardware decoder reports an error.
Further, in the Camera intermediate layer, the software decoding unit includes one or more of the FFMPEG and LIBJPEG open source libraries.
Specifically, in the Camera intermediate layer, the format of the decoded data output by the decoding unit includes one or more of NV12, RGB888, RGB565, and YUV.
The invention also provides a system on chip for realizing the Camera intermediate layer image processing method. The system on chip comprises a multi-core central processing unit in which each core is bound to its own thread and each thread corresponds to a queue; each queue passes the processing result of its thread to the next thread; all threads in the multi-core central processing unit run in parallel; the threads include a fault-tolerant processing thread, a decoding thread, a filtering processing thread and an output thread.
Specifically, in the system on chip:
the fault-tolerant processing thread is used for judging whether the video data suffers data loss or abnormality; if not, the video data is output directly to the decoding thread through a first queue; otherwise, the lost or abnormal data is repaired first and the video data is then output to the decoding thread through the first queue;
the decoding thread is used for calling a decoder interface so that the decoder performs hardware decoding on the video data, or, when the decoder interface cannot be called or the response information fed back by the called decoder interface reports an error, for calling a software library to perform software decoding on the video data; the decoded data is output to the filtering processing thread through a second queue;
the filtering processing thread is used for detecting whether the decoded data contains a data abnormality, discarding abnormal decoded data, and outputting normal decoded data to the output thread through a third queue;
the output thread is used for outputting the decoded data to a frame buffer device, or for packing the decoded data after OMX framework layer encoding and outputting it to the network layer.
Furthermore, the invention also provides an embedded device adopting the system on chip, and the embedded device comprises one or more of a set top box, a smart phone, a tablet computer, a navigator, a vehicle-mounted television and a personal digital assistant terminal.
Advantageous effects
By adding fault-tolerant processing and filtering steps, the invention handles errors in the data both before and after decoding. Before decoding, erroneous data is repaired in advance by fault-tolerant processing, which raises the probability that hardware decoding is invoked successfully, reduces the CPU load incurred by software decoding, and improves display performance.
Filtering the decoded data screens out abnormal data, which is neither transmitted nor displayed; this improves the fault tolerance of subsequent modules and thereby further improves the display performance of the system.
Furthermore, the invention makes full use of the multi-core CPU of the system: different threads are bound to different CPUs, and data is transferred between threads through queues. Fault tolerance, decoding, filtering and output therefore proceed in parallel, further improving system performance and the fluency of the video picture.
With regard to display quality, the method selects the NV12 format as the decoder output. This format can be produced within the decoding step itself, which avoids a separate RGB format conversion and, with it, the yellowing and water ripples that such conversion causes. The data bit width is reduced while display quality is preserved, so subsequent processing occupies fewer system resources: picture quality improves together with display performance.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
FIG. 1 is a Camera intermediate layer architecture diagram according to the present invention;
FIG. 2 is a flow chart of a Camera intermediate layer image processing method according to the present invention;
FIG. 3 is a block diagram of a Camera intermediate layer image processing system on a chip according to the present invention;
FIG. 4 is a flowchart of the fault tolerant processing steps in the Camera intermediate layer image processing method according to the present invention;
FIG. 5 is a flow chart of a filtering process step in a Camera intermediate layer image processing method according to the present invention;
FIG. 6 is a schematic diagram of a Camera middle layer image processing system-on-chip thread interaction according to the present invention;
FIG. 7 is a diagram of a conventional Camera middle layer software decoding architecture;
FIG. 8 is a diagram of a conventional Camera middle layer hardware-software joint decoding architecture;
FIG. 9 is a diagram illustrating the processing steps of the system-on-chip during a video call.
Detailed Description
The preferred embodiments of the present invention will be described in conjunction with the accompanying drawings, and it will be understood that they are described herein for the purpose of illustration and explanation and not limitation.
Referring to fig. 9, in the existing application scenario a video call generally proceeds through the steps shown in the figure. The image acquisition device first acquires MJPEG images in real time and passes the acquired images to the Camera intermediate layer. The Camera intermediate layer performs fault tolerance, scheduling and filtering on the acquired data, and distributes and further processes the decoded data; these operations are completed by the CPU;
the decoding step is performed by the image decoder; the image data to be decoded comprises the locally acquired MJPEG images and the remote H264 video stream. Of the resulting video streams, the former is used for subsequent H264 encoding and the latter for local display;
the decoded, locally acquired image data must also be encoded by the OMX graphics encoder to generate a highly compressed H264 data stream;
the H264 data stream is transmitted through the network layer to the remote end, where it is H264-decoded by the remote device and displayed on that device.
Fig. 1 is the Camera intermediate layer architecture diagram of the present invention; the corresponding method flow diagram is shown in fig. 2. The Camera intermediate layer is the core of the system, and the fault-tolerant processing of the raw data together with the subsequent filtering of the decoded data are its points of innovation. It manages the different data queues, the data fault-tolerance algorithm, and the scheduling and binding of tasks across the different CPUs;
the system on chip, image coding and decoding method, Camera intermediate layer and embedded device provided by this embodiment of the invention are characterized in that the Camera intermediate layer runs as the middle layer; the image decoder acquires configuration information, image data to be decoded and the processing task corresponding to that image data from the middle layer; the image decoder also processes the image data according to the processing task and the configuration information and outputs the processed data to the encoder or to local output; the encoder re-encodes YUV-format data to generate the H264 format; and the Camera intermediate layer is further configured to acquire the configuration information, the image data and the processing task from the frame buffer device and to perform fault-tolerant processing on the image data according to the processing task and the configuration information, yielding data that the hardware can decode, so that the hardware implementation can be invoked. The overall performance of the system on chip is thereby improved, the image display speed is raised, and the user experience is ultimately improved.
In this embodiment, the method flow corresponding to the Camera intermediate layer specifically includes the following steps:
in the first step, receiving a call request;
in the second step, responding to the call request and initializing software resources and hardware resources;
in the third step, judging, according to the initialization result, whether hardware decoding is supported, and judging, according to the video data in the call request, whether fault-tolerant processing is required; if so, jumping to the fourth step; otherwise, jumping to the fifth step;
in the fourth step, parsing the video data and judging whether data loss or abnormality exists; if not, jumping directly to the fifth step; otherwise, repairing the data first and then jumping to the fifth step;
in the fifth step, if the third step judged that hardware decoding is supported, calling the decoder interface so that the decoder performs hardware decoding on the video data; otherwise, calling a software library to perform software decoding on the video data; obtaining decoded data;
in the sixth step, filtering, namely analyzing the decoded data obtained in the fifth step to detect whether a data abnormality exists; if so, discarding the abnormal decoded data and jumping to the seventh step; otherwise, jumping directly to the seventh step;
in the seventh step, outputting the decoded data to a frame buffer device, or encoding the decoded data in the OMX framework layer and then outputting it to the network layer.
Specifically, referring to fig. 4, in the fourth step of the above method the fault-tolerant processing parses each component of the video data one by one, according to the relevant coding standard protocol of the video data, to determine whether the related field information satisfies the protocol; if it does not, the abnormal data field is repaired, the repair being completed mainly by supplementing missing information or skipping superfluous information.
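The patent does not disclose the repair routine itself; as one hedged illustration, assuming MJPEG frames whose JPEG syntax requires an SOI marker (0xFFD8) at the start and an EOI marker (0xFFD9) at the end, "skipping superfluous information" and "supplementing missing information" could look like the following sketch.

```c
#include <stddef.h>
#include <string.h>

/* Hedged sketch of frame-level fault tolerance for MJPEG data:
 * skip superfluous bytes before the JPEG SOI marker and supplement
 * a missing EOI marker.  Illustrative only; the patent does not
 * disclose this exact routine. */
static size_t repair_mjpeg_frame(unsigned char *buf, size_t len,
                                 size_t cap /* buffer capacity */)
{
    size_t start = 0;

    /* Skip superfluous information: bytes before the SOI marker 0xFFD8. */
    while (start + 1 < len &&
           !(buf[start] == 0xFF && buf[start + 1] == 0xD8))
        start++;
    if (start + 1 >= len)
        return 0;                       /* no SOI found: unrecoverable */
    if (start > 0) {
        memmove(buf, buf + start, len - start);
        len -= start;
    }

    /* Supplement missing information: append EOI marker 0xFFD9 if absent. */
    if (!(len >= 2 && buf[len - 2] == 0xFF && buf[len - 1] == 0xD9)) {
        if (len + 2 > cap)
            return 0;                   /* no room to repair */
        buf[len++] = 0xFF;
        buf[len++] = 0xD9;
    }
    return len;                         /* length of the repaired frame */
}
```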
Meanwhile, referring to fig. 5, in the sixth step of the above method the filtering mainly examines the decoded data with a block detection method. Different abnormalities (for example, a white bar or color distortion in a specific region of partially decoded data) are detected by corresponding block detection algorithms, and the data is discarded when an abnormality is found. The core of the algorithm is the principle of block detection; depending on the abnormal situation, only one or more of the Y, U and V components need be examined.
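As an illustration of block detection restricted to the Y component, the sketch below divides the luma plane into 16x16 blocks and flags a frame whose blocks are implausibly uniform and bright, a typical white-bar symptom; the block size and thresholds are assumptions, not values from the patent.

```c
#include <stdint.h>

/* Hedged sketch of Y-component block detection on decoded NV12 data:
 * a 16x16 luma block whose mean is near peak white and whose variance
 * is near zero is counted as anomalous; the frame is discarded if too
 * many blocks are anomalous.  Thresholds are illustrative only. */
#define BLK 16

static int frame_is_abnormal(const uint8_t *y, int width, int height, int stride)
{
    int bad = 0, total = 0;

    for (int by = 0; by + BLK <= height; by += BLK) {
        for (int bx = 0; bx + BLK <= width; bx += BLK) {
            long sum = 0, sq = 0;
            for (int r = 0; r < BLK; r++)
                for (int c = 0; c < BLK; c++) {
                    int v = y[(by + r) * stride + bx + c];
                    sum += v;
                    sq  += (long)v * v;
                }
            long n = BLK * BLK;
            long mean = sum / n;
            long var  = sq / n - mean * mean;
            if (mean > 240 && var < 4)   /* uniform, near-white block */
                bad++;
            total++;
        }
    }
    /* Discard the frame if more than 20% of blocks look like white bars. */
    return total > 0 && bad * 5 > total;
}
```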
To improve display performance and display quality at the same time, referring to fig. 1 and fig. 3, this embodiment further provides a Camera intermediate layer architecture for image processing, which includes a receiving unit, an initialization unit, a judging unit, a decoding unit, a filtering unit and an output unit connected in sequence, and further includes a fault-tolerant unit:
the receiving unit is used for receiving a call request from the application layer and for receiving, in real time, the video data acquired from an external device (such as a camera) and the processing task carried in the call request;
the initialization unit is used for responding to the processing task, initializing the Camera intermediate layer to obtain configuration information, and configuring the software resources and hardware resources of the Camera intermediate layer according to the configuration information; the configuration information includes address, resolution, size and the like. Initialization of the software and hardware resources includes: initializing the queue to be coded, initializing the fault-tolerant queue, initializing the filtering queue, creating the corresponding threads, binding CPUs, and so on.
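As a concrete illustration of the queue initialization just described, the following is a minimal sketch of a thread-safe frame queue in C, such as could back the to-be-coded, fault-tolerant and filtering queues; the names frame_queue, fq_init, fq_push and fq_pop are hypothetical and do not come from the patent.

```c
#include <pthread.h>
#include <stdlib.h>

/* Minimal thread-safe frame queue; a sketch of the queues initialized
 * by the intermediate layer.  All names are illustrative. */
typedef struct {
    void          **items;            /* pointers to frame buffers */
    int             cap, head, tail, count;
    pthread_mutex_t lock;
    pthread_cond_t  not_empty, not_full;
} frame_queue;

static int fq_init(frame_queue *q, int cap)
{
    q->items = calloc(cap, sizeof(void *));
    if (!q->items)
        return -1;
    q->cap = cap;
    q->head = q->tail = q->count = 0;
    pthread_mutex_init(&q->lock, NULL);
    pthread_cond_init(&q->not_empty, NULL);
    pthread_cond_init(&q->not_full, NULL);
    return 0;
}

static void fq_push(frame_queue *q, void *frame)
{
    pthread_mutex_lock(&q->lock);
    while (q->count == q->cap)                 /* block while full */
        pthread_cond_wait(&q->not_full, &q->lock);
    q->items[q->tail] = frame;
    q->tail = (q->tail + 1) % q->cap;
    q->count++;
    pthread_cond_signal(&q->not_empty);
    pthread_mutex_unlock(&q->lock);
}

static void *fq_pop(frame_queue *q)
{
    void *frame;
    pthread_mutex_lock(&q->lock);
    while (q->count == 0)                      /* block while empty */
        pthread_cond_wait(&q->not_empty, &q->lock);
    frame = q->items[q->head];
    q->head = (q->head + 1) % q->cap;
    q->count--;
    pthread_cond_signal(&q->not_full);
    pthread_mutex_unlock(&q->lock);
    return frame;
}
```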
The judging unit is used for judging, according to the configuration information, whether hardware decoding is supported, and for judging, according to the video data carried in the call request, whether fault-tolerant processing is required; if so, the fault-tolerant unit is called, and otherwise the decoding unit is called. In a specific implementation, determining whether the system on chip supports hardware acceleration involves a series of steps: first it is judged whether the system on chip contains a hardware decoder; if it does not, hardware acceleration is not supported; if it does, it is further judged whether that hardware decoder operates normally, in which case the system on chip supports hardware acceleration, and otherwise it does not;
the fault-tolerant unit is connected between the judging unit and the decoding unit and is used for repairing the lost or abnormal video data;
the decoding unit is used for decoding the video data and outputting decoded YUV data, which includes determining the format, position, size and resolution of the graphical elements in the image data. When the decoding unit completes decoding, the Camera intermediate layer acquires response information from the image decoder; the response information indicates whether the image decoder successfully decoded the image data using the processing task and the configuration information, the graphics picture being the data to be displayed or to be encoded once the image decoder has finished decoding the image data according to the processing task and the configuration information.
The filtering unit is used for detecting whether the decoded data has abnormity or not and discarding the decoded data with abnormity;
the output unit is used for buffering the decoded data output by the filtering unit and displaying it locally; the output unit is further configured to perform OMX framework layer encoding on the decoded data to generate H264 data, perform RTP packing on the encoded H264 data, and output the packed data to the network layer.
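As a rough illustration of the RTP packing step, the sketch below fills the 12-byte RTP header of RFC 3550 in front of an H264 NAL unit small enough to fit into a single packet; the payload type 96 is a common dynamic choice and, like the omission of FU-A fragmentation, is an assumption rather than something the patent specifies.

```c
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>   /* htons, htonl */

/* Hedged sketch of RTP packing for coded H264 data: one NAL unit per
 * packet, RFC 3550 fixed header.  `out` must have room for
 * 12 + nal_len bytes.  Returns the packet length. */
static size_t rtp_pack(uint8_t *out, const uint8_t *nal, size_t nal_len,
                       uint16_t seq, uint32_t ts, uint32_t ssrc, int marker)
{
    uint16_t nseq  = htons(seq);
    uint32_t nts   = htonl(ts);
    uint32_t nssrc = htonl(ssrc);

    out[0] = 0x80;                          /* V=2, P=0, X=0, CC=0   */
    out[1] = (marker ? 0x80 : 0x00) | 96;   /* M bit + dynamic PT 96 */
    memcpy(out + 2, &nseq, 2);              /* sequence number       */
    memcpy(out + 4, &nts, 4);               /* timestamp             */
    memcpy(out + 8, &nssrc, 4);             /* SSRC identifier       */
    memcpy(out + 12, nal, nal_len);         /* NAL unit as payload   */
    return 12 + nal_len;
}
```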
In the above Camera intermediate layer, the decoding unit includes a hardware decoder and a software decoding unit:
the decoding unit calls the hardware decoder to perform hardware decoding when the judging unit judges that hardware decoding is supported, and calls the software decoding unit to perform software decoding when the judging unit judges that hardware decoding is not supported or when the response information fed back by the hardware decoder reports an error.
Further, in the Camera intermediate layer, the software decoding unit includes one or more of the FFMPEG and LIBJPEG open source libraries.
Specifically, in the Camera intermediate layer, the format of the decoded data output by the decoding unit includes one or more of NV12, RGB888, RGB565, and YUV.
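The hardware-first, software-fallback dispatch described above might be organized as in the following sketch; hw_decoder_available, hw_decode_frame and sw_decode_frame are hypothetical wrappers around the SoC decoder interface and the FFMPEG/LIBJPEG software path, since the patent names no concrete API.

```c
#include <stddef.h>

/* Hedged sketch of the decode dispatch: try the hardware decoder first
 * and fall back to software decoding when hardware is unsupported or
 * its response information reports an error.  The extern functions are
 * hypothetical wrappers, not APIs named by the patent. */
typedef struct {
    unsigned char *data;
    size_t         len;
} frame_buf;

extern int hw_decoder_available(void);
extern int hw_decode_frame(const frame_buf *in, frame_buf *out); /* 0 = ok */
extern int sw_decode_frame(const frame_buf *in, frame_buf *out); /* FFMPEG/LIBJPEG */

static int decode_frame(const frame_buf *in, frame_buf *out)
{
    if (hw_decoder_available()) {
        if (hw_decode_frame(in, out) == 0)
            return 0;             /* hardware decode succeeded */
        /* response information reported an error: fall through */
    }
    return sw_decode_frame(in, out);  /* software decoding path */
}
```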
The intermediate layer serves as the bridge between data acquisition and data display, and its processing is governed mainly by two factors: display quality and display performance. The aims are, first, to use as little data bit width as possible; second, to perform as little format conversion as possible; and third, to make full use of CPU resources and hardware resources.
In terms of display performance, the fault-tolerant unit and the filtering unit are added so that the intermediate layer's hardware is exploited more fully. Because both units run on the CPU, and to limit the extra cost of those CPU operations, queues are added and threads are opened and bound to different CPUs so that the stages run in parallel; this keeps the data ordered while raising the parallel processing capacity of hardware and software. Taking one scenario as an example, the current intermediate-layer processing comprises the following stages:
phases Time Phases
1. Data fault tolerant operation Average 12ms 1. Data fault tolerant operation
2. Data decoding operation 22ms 2. Data decoding operation
3. Data filtering operations 6.92400ms 3. Data filtering operations
TABLE 1
Based on the above, referring to fig. 6, this embodiment further provides a system on chip for implementing the Camera intermediate layer image processing method. The system on chip includes a multi-core central processing unit in which each core is bound to its own thread and each thread corresponds to a queue; each queue passes the processing result of its thread to the next thread; all threads in the multi-core central processing unit run in parallel; the threads include a fault-tolerant processing thread, a decoding thread, a filtering processing thread and an output thread.
Specifically, in the system on chip:
the fault-tolerant processing thread rectifyThread is bound to CPU0 and is used for judging whether the video data suffers data loss or abnormality; if not, the video data is output directly to the decoding thread through the first queue; otherwise, lost or abnormal data is repaired first and the video data is then output to the decoding thread through the first queue;
the decoding thread captureThread is bound to CPU1 and is used for calling the decoder interface so that the decoder performs hardware decoding on the video data, or, when the decoder interface cannot be called or the response information fed back by the called decoder interface reports an error, for calling a software library to perform software decoding on the video data; the decoded data is output to the filtering processing thread through the second queue;
the filtering processing thread postprocessThread is bound to CPU2 and is used for detecting whether the decoded data contains a data abnormality, discarding abnormal decoded data, and outputting normal decoded data to the output thread through the third queue;
the output thread displayThread is bound to CPU3 and is configured to output the decoded data to a frame buffer device, or to pack the decoded data after OMX framework layer encoding and output it to the network layer.
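On a Linux-based SoC, binding the four threads to CPU0 through CPU3 could be done with pthread CPU affinity as in the sketch below; the entry functions are placeholders, error handling is omitted, and pthread_setaffinity_np is a GNU extension, so this is an assumed realization rather than the patent's own code.

```c
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>

/* Sketch of binding each pipeline thread to its own core, as described
 * above: rectifyThread->CPU0, captureThread->CPU1,
 * postprocessThread->CPU2, displayThread->CPU3. */
extern void *rectify_main(void *);     /* fault-tolerant processing */
extern void *capture_main(void *);     /* decoding                  */
extern void *postprocess_main(void *); /* filtering                 */
extern void *display_main(void *);     /* output                    */

static pthread_t spawn_on_cpu(void *(*fn)(void *), int cpu)
{
    pthread_t tid;
    cpu_set_t set;

    pthread_create(&tid, NULL, fn, NULL);
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    /* Apply affinity right after creation (GNU extension). */
    pthread_setaffinity_np(tid, sizeof(set), &set);
    return tid;
}

static void start_pipeline(void)
{
    spawn_on_cpu(rectify_main,     0);
    spawn_on_cpu(capture_main,     1);
    spawn_on_cpu(postprocess_main, 2);
    spawn_on_cpu(display_main,     3);
}
```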
From the viewpoint of display quality, both hardware decoding and software decoding can output YUV formats (NV12 and the like) and RGB formats (RGB565, RGB888 and the like); the latter requires one YUV-to-RGB conversion more than the former, performed inside the decoder. Experiments show that when YUV-to-RGB conversion is performed, the colors suffer yellowing, water ripples and similar defects, caused by data loss and by the conversion algorithm itself. The supported decoder output formats and their bit widths are NV12 (12 bit), RGB565 (16 bit) and RGB888 (24 bit); the data bit width of NV12 is the smallest. Verification tests give the performance ranking NV12 > RGB565 > RGB888 and the display-effect ranking NV12 > RGB888 > RGB565.
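The bit-width ranking translates directly into per-frame memory traffic; the short calculation below, a worked example rather than data from the patent, gives the per-frame sizes of the three formats at 1920x1080.

```c
#include <stdio.h>

/* Per-frame size for the three decode output formats at 1920x1080.
 * NV12 carries 12 bits/pixel (full-resolution 8-bit Y plane plus
 * interleaved UV at quarter resolution), RGB565 16 bits/pixel,
 * RGB888 24 bits/pixel. */
int main(void)
{
    const long w = 1920, h = 1080, pixels = w * h;

    printf("NV12  : %ld bytes\n", pixels * 12 / 8); /* 3,110,400 */
    printf("RGB565: %ld bytes\n", pixels * 16 / 8); /* 4,147,200 */
    printf("RGB888: %ld bytes\n", pixels * 24 / 8); /* 6,220,800 */
    return 0;
}
```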
Still further, the above technique may be embodied in an embedded device using the above system on chip, where the embedded device includes one or more of a set-top box, a smart phone, a tablet computer, a navigator, a vehicle-mounted television and a personal digital assistant terminal. The system on chip comprises a Camera acquisition device, the Camera intermediate layer, an image decoder, an image encoder, a frame buffer device and a network layer, wherein:
the Camera intermediate layer is used for processing the acquired data and belongs to the core part of the system;
the image decoder is used for acquiring configuration information, image data to be decoded and processing tasks corresponding to the image data from the middle layer;
the frame buffer device is used for displaying the filtered data transmitted locally or from the remote end;
and the network layer is used for establishing connection with the Internet.
In the prior art, the system on chip of an embedded device falls into one of two situations: either it has no dedicated image encoder and image decoder for graphics processing, or corresponding hardware exists but its capability is limited and some abnormal sources cannot be processed at all. In both cases the encoding and decoding work is directly or indirectly handed over to the CPU, which increases the CPU load and thus reduces the overall performance of the system.
As shown in fig. 7, the encoding and decoding method using the FFMPEG graphics library is pure software: all graphics operations are handed entirely to the CPU, which increases the CPU load, thereby reducing the overall performance of the system and slowing graphics display.
The system on chip shown in fig. 8 tries hardware encoding and decoding first and falls back to software encoding and decoding second, which effectively improves system performance and uses software to compensate for the limited processing capability of the hardware. In practice, however, hardware decoding is often prevented by data errors and cannot proceed effectively.
Compared with the prior art shown in fig. 7 or fig. 8, the embodiment of the present invention provides an image processing method that adds fault-tolerance and filtering capability on top of hardware codecs, and this method overcomes the slow image display of the system on chip.
Graphics processing that combines the FFMPEG or LIBJPEG open-source graphics library with a pure software mode suffers from low performance when displaying graphics of complex scenes, and the picture display is not sufficiently smooth. The software mode mainly calls the standard functions provided by the open source library:
Serial number   Test item    Resolution   Hardware mode   Software mode
01              Frame rate   1920x1080    50 fps          30 fps
TABLE 2
In table 2, the software mode and the hardware mode run in the same environment; as the results in row 01 show, the hardware mode far outperforms the software mode.
The advantage of the technical scheme of the invention is that the original software-mode operation is kept while a hardware mode and fault-tolerant processing are added. From the hardware design perspective, an image codec must be added; from the software perspective, a kernel-mode driver and a user-mode driver must be implemented: the kernel mode realizes the image encoding and decoding operations required under the Linux system architecture, and the user mode completes the interfacing with Camera HAL and OMX.
Those of ordinary skill in the art will understand that, although the present invention has been described in detail with reference to the foregoing embodiments, changes may be made to the embodiments, or equivalents substituted for some of their features, without departing from the spirit and scope of the invention. Any modification, equivalent replacement or improvement made within the spirit and principle of the present invention falls within the protection scope of the present invention.

Claims (9)

1. A Camera intermediate layer image processing method is characterized by comprising the following steps:
in the first step, receiving a call request;
in the second step, responding to the call request and initializing software resources and hardware resources, wherein the initialization of the software resources and the hardware resources comprises: initializing a queue to be coded, initializing a fault-tolerant queue, initializing a filtering queue, creating a corresponding thread and binding a CPU (central processing unit);
in the third step, judging, according to the initialization result, whether hardware decoding is supported, and judging, according to the video data in the call request, whether fault-tolerant processing is required; if so, jumping to the fourth step; otherwise, jumping to the fifth step;
in the fourth step, parsing the video data and judging whether data loss or abnormality exists; if not, jumping directly to the fifth step; otherwise, repairing the data first and then jumping to the fifth step;
in the fifth step, if the third step judged that hardware decoding is supported, calling a decoder interface so that the decoder performs hardware decoding on the video data; otherwise, calling a software library to perform software decoding on the video data; obtaining decoded data;
in the sixth step, filtering, namely analyzing the decoded data obtained in the fifth step to detect whether a data abnormality exists; if so, discarding the abnormal decoded data and jumping to the seventh step; otherwise, jumping directly to the seventh step;
in the seventh step, outputting the decoded data to a frame buffer device, or encoding the decoded data in the OMX framework layer and then outputting it to a network layer.
2. The Camera intermediate layer image processing method according to claim 1, wherein in the fourth step, the fault tolerant process parses the video data according to an encoding standard protocol of the video data, and determines whether field information of the video data conforms to the encoding standard protocol; repairing field information of the video data which does not conform to the encoding standard protocol;
the repair includes supplementing missing information or skipping superfluous information.
3. The Camera intermediate layer image processing method according to claim 1, wherein in the sixth step the filtering process includes detecting data abnormalities by a block detection method;
the block detection method comprises detecting one or more of the Y, U and V components in the decoded data.
4. A Camera intermediate layer for image processing is characterized by comprising a receiving unit, an initialization unit, a judgment unit, a decoding unit, a filtering unit and an output unit which are sequentially connected, and further comprising a fault-tolerant unit;
the receiving unit is used for receiving a call request of an application layer, and acquiring video data and a processing task carried in the call request;
the initialization unit is used for responding to the processing task, initializing the Camera intermediate layer, obtaining configuration information, and configuring software resources and hardware resources of the Camera intermediate layer according to the configuration information;
the judging unit is used for judging whether hardware decoding is supported or not according to the configuration information, judging whether fault-tolerant processing is carried out or not according to the video data carried in the calling request, if the fault-tolerant processing is carried out, calling the fault-tolerant unit, and if not, calling the decoding unit;
the fault-tolerant unit is connected between the judging unit and the decoding unit and is used for repairing the lost or abnormal video data;
the decoding unit is used for decoding the video data and outputting decoded data;
the filtering unit is used for detecting whether the decoded data has abnormity or not and discarding the decoded data with abnormity;
the output unit is used for buffering the decoded data output by the filtering unit and locally displaying the decoded data; the output unit is further configured to perform OMX framework layer coding on the decoded data, package the coded data, and output the packaged data to a network layer;
wherein the decoding unit comprises a hardware decoder and a software decoding unit;
the decoding unit calls a hardware decoder to perform hardware decoding when the judging unit judges that the hardware decoding is supported; and calling a software decoding unit to perform software decoding when the judging unit judges that the hardware decoding is not supported or when response information fed back by the hardware decoder is in error.
5. The Camera intermediate layer for image processing as recited in claim 4, wherein the software decoding unit comprises one or more of an FFMPEG open source library or a LIBJPEG open source library.
6. The Camera intermediate layer for image processing as recited in claim 4, wherein the format of the decoded data output by the decoding unit comprises one or more of NV12, RGB888, RGB565 or YUV.
7. A system on chip for performing video image processing according to the Camera intermediate layer image processing method of claim 1, the system on chip comprising a multi-core central processing unit, wherein each core of the central processing unit is bound to a thread and each thread corresponds to a queue; each queue outputs the processing result of its thread to the next thread; all threads in the multi-core central processing unit are processed in parallel;
the threads include a fault tolerant processing thread, a decoding thread, a filtering processing thread, and an output thread.
8. The system on a chip of claim 7, wherein the fault tolerant processing thread is configured to determine whether there is data loss or an exception for the video data, and if there is no data loss or an exception, output the video data to a decoding thread directly through a first queue; otherwise, data restoration is carried out on lost or abnormal data, and then the video data are output to the decoding thread through the first queue;
the decoding thread is used for calling a decoder interface so that the decoder performs hardware decoding on the video data, or, when the decoder interface cannot be called or the response information fed back by the called decoder interface reports an error, for calling a software library to perform software decoding on the video data; the decoded data is output to the filtering processing thread through a second queue;
the filtering processing thread is used for detecting whether the decoded data has data abnormality or not, discarding the decoded data with data abnormality, and outputting the decoded data without data abnormality to the output thread through a third queue;
the output thread is used for outputting the decoded data to a frame buffer device; or after OMX framework layer coding is carried out on the decoded data, the decoded data are packaged and output to a network layer.
9. An embedded device using the system on chip of claim 7 or 8, wherein the embedded device comprises one or more of a set top box, a smart phone, a tablet computer, a navigator, a car television, and a personal digital assistant terminal.
CN201710493813.2A 2017-06-26 2017-06-26 Camera intermediate layer image processing method and system on chip Active CN109120938B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710493813.2A CN109120938B (en) 2017-06-26 2017-06-26 Camera intermediate layer image processing method and system on chip


Publications (2)

Publication Number Publication Date
CN109120938A CN109120938A (en) 2019-01-01
CN109120938B true CN109120938B (en) 2021-07-09

Family

ID=64732608




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant