CN115834793B - Image data transmission control method in video mode - Google Patents


Info

Publication number
CN115834793B
CN115834793B
Authority
CN
China
Prior art keywords
module
lcdc
mipi
period
image data
Prior art date
Legal status
Active
Application number
CN202310122699.8A
Other languages
Chinese (zh)
Other versions
CN115834793A
Inventor
陈锋
白颂荣
赖志业
Current Assignee
Shenzhen Xihua Technology Co Ltd
Original Assignee
Shenzhen Xihua Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Xihua Technology Co Ltd
Priority to CN202310122699.8A
Publication of CN115834793A
Application granted
Publication of CN115834793B


Abstract

The embodiment of the application discloses an image data transmission control method in a video mode, which is applied to a chip in an image data processing chipset. The method comprises the following steps: the LCDC module initializes and configures default parameters of the chip; the VPRE module determines the single-frame valid data processing duration Tvact_in at the receiving end; the DSI lane rate and the DPI OSC are updated; the interval duration LN between T0_2 and T1_2 is determined; VSA_LCDC and VBP_LCDC are determined; T2_3 is determined and sent to the LCDC module; the LCDC module receives T2_3 and sends Vsync_out to the MIPI TX module at T2_3; the MIPI TX module receives Vsync_out and transmits the third frame of image data to the display module. The method helps keep input pixel data and output pixel data balanced when the chip is applied, so that the screen end can still display normally after long-time transmission.

Description

Image data transmission control method in video mode
Technical Field
The present disclosure relates to the field of image data processing technologies, and in particular, to a method for controlling image data transmission in a video mode.
Background
At present, when a user's terminal device suffers screen damage or similar faults and the basic configuration of the MIPI chipset needs to be readjusted, the control timing of the image data input by the AP-side chip remains unchanged after part of the modules in the image data processing chipset are replaced. The input pixel data and the output pixel data therefore become unbalanced, the image data output to the screen end is abnormal, and normal display at the screen end is difficult to guarantee during long-time image data transmission.
Disclosure of Invention
The application provides an image data transmission control method in a video mode, which ensures that the transmission time of one frame of data at the AP input end is longer than the transmission time of one frame of data at the output end, and that when an upstream module in the chipset initiates transmission of the next frame of data, the downstream module is ready for that transmission. Input pixel data and output pixel data can thus be kept balanced when the chip is applied, and the screen end can display normally after long-time transmission.
In a first aspect, the present application provides an image data transmission control method in a Video mode, which is applied to a chip in an image data processing chipset, where the image data processing chipset includes an application processor AP, the chip and a display module, the AP is in communication connection with the chip, and the chip is in communication connection with the display module; the chip includes a mobile industry processor interface receiving MIPI RX module, a Video preprocessing VPRE module, an image processing VIDC module, an image display processing LCDC module and a MIPI TX module, the MIPI RX module is connected to the VPRE module, the VPRE module is connected to the VIDC module, the VIDC module is connected to the LCDC module, and the LCDC module is connected to the MIPI TX module; the Video mode means that the output mode of the chip is the Video mode; the method comprises the following steps:
The LCDC module initializes and configures default parameters of the chip, the default parameters including a display interface DSI channel lane rate, a display pixel interface DPI clock frequency OSC, and a buffer line number threshold LT, the LT being configured to maintain the following state: after the LCDC module sends an output end frame synchronization Vsync_out signal to the MIPI TX module, valid data in an internal buffer of the LCDC module can be transmitted to the display module by the MIPI TX module, and the valid data in the internal buffer is used to compensate for measurement and calculation errors of the LCDC module;
in a first period of time during which the chips receive and process the first frame of image data, the VPRE module counts, through an IPI interface, an interval duration LF between a time point t0_1 at which a first receiving end frame synchronization signal vsync_in sent by the AP is received and a time point t0_2 at which a second vsync_in is received, and determines a receiving end single frame valid data processing duration tvact_in according to the LF; and updating the DSI lane rate, the DPI OSC according to the tvact_in; wherein the first period refers to a period between the t0_1 and the t0_2;
in a second period of time during which the chips receive and process second frame image data, the VPRE module determines an interval duration LN between the t0_2 and the t1_2, where t1_2 is a time point when the LCDC module receives a frame start signal sent by the VIDC module in a current period of time, where the frame start signal is used to instruct the LCDC module to start and start transmitting image data; determining a vertical synchronous pixel line number VSA_LCDC and a vertical back shoulder pixel line number VBP_LCDC in a control time sequence of the LCDC module according to the configuration information of the display module; and determining t2_3 according to the LN, the vsa_lcdc, the vbp_lcdc, the LT, and a time point t0_3 when the MIPI RX module receives a third vsync_in of the AP, wherein the t2_3 is a time point when the LCDC module starts to start a DPI interface and sends an output end frame synchronization vsync_out signal to the MIPI TX module within a third period when the chip receives and processes third frame image data, and the vsync_out signal is used for indicating the MIPI TX module to start transmitting image data; and sending the t2_3 to the LCDC module; wherein the second period refers to a period between the t0_2 and the t0_3, the third period refers to a period between the t0_3 and the t0_4, and the t0_4 is a time point when the VPRE module receives the vsync_in sent by the AP for the fourth time;
During the third period, the LCDC module receives the t2_3 and sends the vsync_out to the MIPI TX module at the t2_3; and the MIPI TX module receives the Vsync_out and starts to send the third frame image data to the display module.
It can be seen that, in the embodiment of the present application, the LCDC module of the chip first initializes and configures the default parameters of the chip; secondly, in the first period, the VPRE module counts, through the IPI interface, the interval duration LF between the time point T0_1 at which the first receiving end frame synchronization signal Vsync_in sent by the AP is received and the time point T0_2 at which the second Vsync_in is received, and determines the receiving end single-frame valid data processing duration Tvact_in; and updates the DSI lane rate and the DPI OSC according to the Tvact_in; again, in the second period, the VPRE module determines the interval duration LN between T0_2 and T1_2; determines the vertical synchronization pixel line number VSA_LCDC and the vertical back shoulder pixel line number VBP_LCDC in the control timing of the LCDC module; determines T2_3, T2_3 being the time point at which the LCDC module sends the output end frame synchronization Vsync_out signal to the MIPI TX module in the third period, in which the chip receives and processes the third frame image data; and sends T2_3 to the LCDC module; finally, in the third period, the LCDC module receives T2_3 and sends Vsync_out to the MIPI TX module at T2_3; and the MIPI TX module receives Vsync_out and starts to send the third frame image data to the display module. It can thus be seen that, based on the principle that the control timing of the chip receiving end and the control timing of the transmitting end remain consistent, the internal modules of the chip obtain the T2_3 parameter of the LCDC module by calculation, and the LCDC module of the chip is controlled to start and transmit data in the third period according to T2_3. Since the LT is used to maintain the following state: after the LCDC module sends the output end frame synchronization Vsync_out signal to the MIPI TX module, valid data exists in the internal buffer of the LCDC module and can be transmitted to the display module by the MIPI TX module, and the valid data in the internal buffer is used to compensate for measurement and calculation errors of the LCDC module, the input pixel data and the output pixel data can be kept balanced when the chip is applied, and the screen end can display normally after long-time transmission.
In a second aspect, the present application provides a chip, applied to an image data processing chipset, where the image data processing chipset includes an application processor AP, the chip and a display module, the AP is communicatively connected to the chip, and the chip is communicatively connected to the display module; the chip includes a mobile industry processor interface receiving MIPI RX module, a Video preprocessing VPRE module, an image processing VIDC module, an image display processing LCDC module, and a MIPI TX module, the MIPI RX module is connected to the VPRE module, the VPRE module is connected to the VIDC module, the VIDC module is connected to the LCDC module, and the LCDC module is connected to the MIPI TX module; the Video mode means that the output mode of the chip is the Video mode; wherein,
the LCDC module is configured to initialize default parameters of the configuration chip, where the default parameters include a display interface DSI channel lane rate, a display pixel interface DPI clock frequency OSC, and a buffer line number threshold LT, where the LT is configured to maintain the following states: after the LCDC module sends an output end frame synchronization Vsync_out signal to the MIPI TX module, effective data in an internal buffer memory of the LCDC module can be transmitted to the display module by the MIPI TX module, and the effective data in the internal buffer memory is used for compensating measurement and calculation errors of the LCDC module;
In a first period of time during which the chips receive and process the first frame image data, the VPRE module is configured to count, through an IPI interface, an interval duration LF between a time point t0_1 at which the first receiving end frame synchronization signal vsync_in sent by the AP is received and a time point t0_2 at which the second vsync_in is received, and determine a receiving end single frame valid data processing duration tvact_in according to the LF; and updating the DSI lane rate, the DPI OSC according to the tvact_in; wherein the first period refers to a period between the t0_1 and the t0_2;
in a second period of time during which the chips receive and process second frame image data, the VPRE module is further configured to determine an interval duration LN between the t0_2 and t1_2, where t1_2 is a time point when the LCDC module receives a frame start signal sent by the VIDC module in a current period of time, where the frame start signal is used to instruct the LCDC module to start and start transmitting image data; determining a vertical synchronous pixel line number VSA_LCDC and a vertical back shoulder pixel line number VBP_LCDC in a control time sequence of the LCDC module according to the configuration information of the display module; and determining t2_3 according to the LN, the vsa_lcdc, the vbp_lcdc, the LT, and a time point t0_3 when the MIPI RX module receives a third vsync_in of the AP, wherein the t2_3 is a time point when the LCDC module starts to start a DPI interface and sends an output end frame synchronization vsync_out signal to the MIPI TX module within a third period when the chip receives and processes third frame image data, and the vsync_out signal is used for indicating the MIPI TX module to start transmitting image data; and sending the t2_3 to the LCDC module; wherein the second period refers to a period between the t0_2 and the t0_3, the third period refers to a period between the t0_3 and the t0_4, and the t0_4 is a time point when the VPRE module receives the vsync_in sent by the AP for the fourth time;
The LCDC module is further configured to receive the t2_3 and send the vsync_out to the MIPI TX module at the t2_3 during the third period; and the MIPI TX module receives the Vsync_out and starts to send the third frame image data to the display module.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic chip structure of an image data processing chipset according to an embodiment of the present application;
fig. 2 is an attribute interaction diagram of an image data transmission control method in a video mode according to an embodiment of the present application;
fig. 3 is a timing chart of an image data transmission control method in a video mode according to an embodiment of the present application.
Detailed Description
In order to make the present application solution better understood by those skilled in the art, the following description will clearly and completely describe the technical solution in the embodiments of the present application with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
The terms first, second and the like in the description and in the claims of the present application and in the above-described figures, are used for distinguishing between different objects and not for describing a particular sequential order. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements but may include other steps or elements not listed or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
The term "at least one" in the present application means one or more, and a plurality means two or more. In the present application and/or describing the association relationship of the association object, the representation may have three relationships, for example, a and/or B may represent: a alone, a and B together, and B alone, wherein A, B may be singular or plural. The character "/" generally indicates that the context-dependent object is an "or" relationship. "at least one (item) below" or the like, refers to any combination of these items, including any combination of single item(s) or plural items(s). For example, at least one (one) of a, b or c may represent: a, b, c, a and b, a and c, b and c, or a, b and c, wherein each of a, b, c may itself be an element, or may be a collection comprising one or more elements.
It should be noted that, in the embodiments of the present application, "equal to" may be used together with "greater than", in which case it applies to the technical scheme adopted when the value is greater than, or it may be used together with "less than", in which case it applies to the technical scheme adopted when the value is less than; "equal to" is not used together with "less than" when it is used together with "greater than", and is not used together with "greater than" when it is used together with "less than". The terms "of", "corresponding" and "relevant" in the embodiments of the present application may sometimes be used interchangeably; it should be noted that, when the distinction is not emphasized, the meanings to be expressed are consistent.
Based on the above-mentioned problems, the present application proposes an image data transmission control method in a video mode, which is described in detail below.
Referring to fig. 1, fig. 1 is a schematic diagram of a chip structure of an image data processing chipset according to an embodiment of the present application. The image data processing chipset comprises an Application Processor (AP), a chip 100 and a display module, wherein the AP is in communication connection with the chip 100, and the chip 100 is in communication connection with the display module.
As shown in fig. 1, the chip 100 includes a mobile industry processor interface receiving an MIPI RX module 101, a Video preprocessing VPRE module 102, an image processing VIDC module 103, an image display processing module LCDC module 104, and an MIPI TX module 105, where the MIPI RX module 101 is connected to the VPRE module 102, the VPRE module 102 is connected to the VIDC module 103, the VIDC module 103 is connected to the LCDC module 104, the LCDC module 104 is connected to the MIPI TX module 105, and the Video mode means that the output mode of the chip is a Video mode. In the Video mode, the VIDC module informs the LCDC module of carrying out data synchronous transmission through a frame start signal, and the LCDC module finishes data processing and informs the MIPI TX module of carrying out data synchronous transmission through an output end frame synchronization Vsync_out signal.
Referring to fig. 2, fig. 2 is an attribute interaction diagram of an image data transmission control method in a video mode according to an embodiment of the present application, which is applied to a chip 100 of the image data processing chipset shown in fig. 1; as shown in the figure, the present image data transmission control method includes the following steps.
In step 210, the LCDC module initializes default parameters of the configuration chip, where the default parameters include a display interface DSI channel lane rate, a display pixel interface DPI clock frequency OSC, and a buffer line number threshold LT, where the LT is used to maintain the following states: after the LCDC module sends an output end frame synchronization vsync_out signal to the MIPI TX module, effective data in an internal buffer of the LCDC module may be transmitted to the display module by the MIPI TX module, where the effective data in the internal buffer is used to compensate measurement and calculation errors of the LCDC module.
In order to ensure that valid data can still be transmitted after the LCDC module sends the Vsync_out, the LT is configured as a number of buffered lines (the buffer of the LCDC module is 4K pixels, so the configured line number must be smaller than the hardware buffer size).
Here, Vsync refers to a frame synchronization signal indicating the start of scanning one frame, a frame being one picture displayed by the LCD.
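As an illustration of this constraint only, the sketch below checks a candidate LT against the 4K-pixel internal buffer mentioned above. It is a minimal sketch: the function names, the 1024-pixel line width in the example and the assumption that the 4096-pixel capacity is shared across whole lines are ours, not taken from the patent.

```python
def max_lt_lines(buffer_pixels: int, line_width_pixels: int) -> int:
    """Largest number of whole lines that fit in the LCDC internal buffer."""
    return buffer_pixels // line_width_pixels


def lt_is_valid(lt_lines: int, line_width_pixels: int, buffer_pixels: int = 4096) -> bool:
    """The configured LT must stay below the hardware buffer capacity in lines."""
    return 0 <= lt_lines < max_lt_lines(buffer_pixels, line_width_pixels)


# Example with an assumed 1024-pixel line width: the 4K-pixel buffer holds
# 4 whole lines, so LT = 0 and LT = 3 are acceptable, LT = 4 is not.
assert lt_is_valid(0, line_width_pixels=1024)
assert lt_is_valid(3, line_width_pixels=1024)
assert not lt_is_valid(4, line_width_pixels=1024)
```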
Step 220, in a first period of time during which the chip receives and processes the first frame image data, the VPRE module counts, through an IPI interface, an interval duration LF between a time point t0_1 at which the first receiving end frame synchronization signal vsync_in sent by the AP is received and a time point t0_2 at which the second vsync_in is received, and determines a receiving end single frame valid data processing duration tvact_in according to the LF; and updating the DSI lane rate, the DPI OSC according to the tvact_in; wherein the first period refers to a period between the t0_1 and the t0_2.
Here, T0 refers to the time at which the Vsync sent by the AP is received; T1 refers to the time at which the LCDC module receives the frame start signal of the VIDC module; T2_3 refers to the time at which the LCDC module issues the Vsync_out signal to the MIPI TX module.
The single-frame valid data processing duration refers to the time required for transmitting the Vactive lines; the determining the receiving end single-frame valid data processing duration Tvact_in according to the LF includes:
determining the input total line number Vtotal = VSA_in + VBP_in + Vactive_in + VFP_in of the single-frame image data received by the chip, where VSA_in represents the number of vertical synchronization pixel lines in the chip receiving end control timing, VBP_in represents the number of vertical back shoulder pixel lines in the chip receiving end control timing, Vactive_in represents the number of vertical effective pixel lines in the single-frame image data received in the chip receiving end control timing, VFP_in represents the number of vertical front shoulder pixel lines in the chip receiving end control timing, and the line number represents a number of lines;
determining the time for the chip to receive the single-frame image data as Ttotal, namely LF;
determining the number of valid data lines in the single-frame image data received by the chip as Vactive_in;
determining the single-frame valid data processing duration Tvact_in = Ttotal × Vactive_in / Vtotal = LF × Vactive_in / Vtotal.
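The calculation above amounts to a simple proportion, sketched below for illustration. The function and parameter names are ours, LF is assumed to be already converted to seconds, and the example values are likewise assumptions.

```python
def tvact_in(lf_s: float, vsa_in: int, vbp_in: int, vactive_in: int, vfp_in: int) -> float:
    """Receiving end single-frame valid data processing duration.

    Ttotal (= LF) covers Vtotal lines, so the Vactive_in valid lines take
    LF * Vactive_in / Vtotal seconds.
    """
    vtotal = vsa_in + vbp_in + vactive_in + vfp_in  # total input lines per frame
    return lf_s * vactive_in / vtotal


# Example with an assumed 60 Hz input frame rate and assumed porch values:
print(tvact_in(lf_s=1 / 60, vsa_in=2, vbp_in=8, vactive_in=1080, vfp_in=10))
```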
Step 230, during a second period of time in which the chip receives and processes the second frame of image data, the VPRE module determines an interval length LN between the t0_2 and the t1_2, where t1_2 is a time point when the LCDC module receives a frame start signal sent by the VIDC module in a current period of time, where the frame start signal is used to instruct the LCDC module to start and start transmitting the image data; determining a vertical synchronous pixel line number VSA_LCDC and a vertical back shoulder pixel line number VBP_LCDC in a control time sequence of the LCDC module according to the configuration information of the display module; and determining t2_3 according to the LN, the vsa_lcdc, the vbp_lcdc, the LT, and a time point t0_3 when the MIPI RX module receives a third vsync_in of the AP, wherein the t2_3 is a time point when the LCDC module starts to start a DPI interface and sends an output end frame synchronization vsync_out signal to the MIPI TX module within a third period when the chip receives and processes third frame image data, and the vsync_out signal is used for indicating the MIPI TX module to start transmitting image data; and sending the t2_3 to the LCDC module; wherein the second period refers to a period between the t0_2 and the t0_3, the third period refers to a period between the t0_3 and the t0_4, and the t0_4 is a time point when the VPRE module receives the vsync_in sent by the AP for the fourth time.
Step 240, during the third period, the LCDC module receives the t2_3 and sends the vsync_out to the MIPI TX module at the t2_3; and the MIPI TX module receives the Vsync_out and starts to send the third frame image data to the display module.
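To make the division of work across the three periods easier to follow, the outline below strings steps 210-240 together. It is only an illustrative sketch: the chip object and every method on it (init_defaults, measure_lf, compute_t2_3, schedule_vsync_out and so on) are hypothetical names used to show which module acts in which period, not an API defined by the patent.

```python
from dataclasses import dataclass


@dataclass
class StartupTiming:
    lf: float = 0.0     # period 1: T0_2 - T0_1, frame interval at the receive side
    ln: float = 0.0     # period 2: T1_2 - T0_2, frame-start offset seen by the LCDC module
    t2_3: float = 0.0   # period 3: scheduled Vsync_out time for the LCDC module


def startup_sequence(chip, lt_lines: int) -> StartupTiming:
    """Outline of steps 210-240 across the first three frame periods."""
    timing = StartupTiming()

    # Step 210: the LCDC module loads the default DSI lane rate, DPI OSC and LT.
    chip.lcdc.init_defaults(dpi_osc_hz=300_000_000, lt_lines=lt_lines)

    # Step 220 (period 1): the VPRE module measures LF over the IPI interface,
    # derives Tvact_in and retunes the DSI lane rate / DPI OSC.
    timing.lf = chip.vpre.measure_lf()
    chip.vpre.update_output_clocks(chip.vpre.tvact_in(timing.lf))

    # Step 230 (period 2): the VPRE module measures LN, reads VSA_LCDC / VBP_LCDC
    # from the panel configuration, computes T2_3 and hands it to the LCDC module.
    timing.ln = chip.vpre.measure_ln()
    timing.t2_3 = chip.vpre.compute_t2_3(timing.ln, lt_lines)
    chip.lcdc.schedule_vsync_out(timing.t2_3)

    # Step 240 (period 3): at T2_3 the LCDC module raises Vsync_out and the
    # MIPI TX module starts sending the third frame to the display module.
    return timing
```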
In one possible example, the determining t2_3 according to the LN, the vsa_lcdc, the vbp_lcdc, the LT, and the time point t0_3 when the MIPI RX module receives the third vsync_in of the AP includes:
determining t2_3 'from the LN, the vsa_lcdc, the vbp_lcdc, and a time point t0_3 when the MIPI RX module receives the third vsync_in of the AP, the t2_3' being a time point when the LCDC module starts to start up a DPI interface and transmits the vsync_out signal to the MIPI TX module without considering the LT;
determining, according to the LT and the T2_3', that the T2_3 = the T2_3' + the LT.
The value of LT may be, for example, 0 line or 3 lines.
In this example, the chip first calculates the time point at which the LCDC module starts the DPI interface and sends the Vsync_out signal to the MIPI TX module without considering the LT, and then calculates T2_3 according to the LT, so that calculation errors are fully taken into account and accuracy and stability are improved.
In this possible example, the determining T2_3' according to the LN, the VSA_LCDC, the VBP_LCDC, and the time point T0_3 when the MIPI RX module receives the third Vsync_in of the AP includes:
determining that the time point at which the LCDC module actually receives valid data after the T0_3 is T0_3 + LN;
determining that the time point at which the LCDC module actually starts transmitting valid data after the T0_3 is T0_3 + Δt + TVSA_LCD + TVBP_LCD, where TVSA_LCD and TVBP_LCD are vertical blanking parameters of the LCDC module, that is, the LCDC module starts actually outputting valid image data only after (TVSA_LCD + TVBP_LCD) has elapsed from its start; the TVSA_LCD is determined according to the VSA_LCDC and the time for the LCDC module to transmit one line of image data, the TVBP_LCD is determined according to the VBP_LCDC, Δt is the time difference between the T0_3 and the T2_3', i.e. Δt = T2_3' - T0_3, and T2_3' is the time point at which the LCDC module starts the DPI interface and sends the Vsync_out signal to the MIPI TX module without considering the LT;
the mathematical expression of the requirement that the timing at which the LCDC module receives image data and the timing at which it transmits image data be kept consistent is the following target formula:
T0_3 + LN = T0_3 + Δt + TVSA_LCD + TVBP_LCD;
determining the T2_3' = T0_3 + LN - (TVSA_LCD + TVBP_LCD) according to the target formula and the calculation formula of Δt.
According to the LT and the T2_3', the T2_3 = T2_3' + LT = T0_3 + LN - (TVSA_LCD + TVBP_LCD) + LT is determined.
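A minimal numeric sketch of this derivation follows, assuming that TVSA_LCD and TVBP_LCD have already been converted to seconds (see the line-time calculation below) and that the LT margin is likewise expressed as a time; the function and parameter names are ours.

```python
def compute_t2_3(t0_3_s: float, ln_s: float, tvsa_lcd_s: float,
                 tvbp_lcd_s: float, lt_s: float) -> float:
    """T2_3 from the target formula T0_3 + LN = T0_3 + dt + TVSA_LCD + TVBP_LCD."""
    dt = ln_s - (tvsa_lcd_s + tvbp_lcd_s)  # dt = T2_3' - T0_3
    t2_3_prime = t0_3_s + dt               # Vsync_out time without the LT margin
    return t2_3_prime + lt_s               # T2_3 = T2_3' + LT
```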
In a specific implementation, the TVSA_LCD is determined according to the VSA_LCDC, specifically as VSA_LCDC × (the time for the LCDC module to transmit one line of image data), and the TVBP_LCD is determined according to the VBP_LCDC, specifically as VBP_LCDC × (the time for the LCDC module to transmit one line of image data); the time for the LCDC module to transmit one line of image data is calculated as follows:
determining the input total line number Vtotal_LCDC = VSA_LCDC + VBP_LCDC + Vactive_LCDC + VFP_LCDC of the single-frame image data received by the LCDC module of the chip, where Vactive_LCDC represents the number of vertical effective pixel lines in the single-frame image data received in the LCDC control timing, and VFP_LCDC represents the number of vertical front shoulder pixel lines in the LCDC control timing;
determining the time for the LCDC module to receive single-frame image data as Ttotal, namely LF;
determining that the LCDC module transmits one line of image data in Ttotal / Vtotal_LCDC, where "/" indicates division.
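The per-line time and the two blanking times can then be computed as sketched below. This is a minimal sketch under the definitions above; the function and parameter names are ours, and Ttotal (= LF) is assumed to be expressed in seconds.

```python
def lcdc_blanking_times(ttotal_s: float, vsa_lcdc: int, vbp_lcdc: int,
                        vactive_lcdc: int, vfp_lcdc: int) -> tuple[float, float, float]:
    """Return (t_line, TVSA_LCD, TVBP_LCD) for the LCDC control timing."""
    vtotal_lcdc = vsa_lcdc + vbp_lcdc + vactive_lcdc + vfp_lcdc
    t_line = ttotal_s / vtotal_lcdc  # time for the LCDC module to send one line
    return t_line, vsa_lcdc * t_line, vbp_lcdc * t_line
```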
The outputting of the effective image data refers to outputting pixel data which can be displayed on a display screen of the display module.
As can be seen, in this example, T0_3 + LN is the time identified from the receiving end control timing of the chip LCDC module, and T0_3 + Δt + TVSA_LCD + TVBP_LCD is the time identified from the transmitting end control timing of the chip LCDC module; aligning these two times allows the input pixel data and the output pixel data to remain balanced when the chip LCDC module is applied, while calculation and other errors are fully taken into account to improve accuracy.
In one possible example, the LN is greater than or equal to (the vsa_lcdc+the vbp_lcdc).
In one possible example, the LN is equal to the sum of V_blank and the datapath delay, where V_blank represents the receiving duration of the lines of pixel data, among the image data sent by the AP, that will be blanked in the horizontal direction, the blanking referring to hidden display of the display module; the V_blank is equal to the sum of the length VSA_in of the frame synchronization signal at the receiving end and the back shoulder VBP_in of the frame synchronization signal at the receiving end, and the datapath delay represents the interval duration between the time point at which the MIPI RX module receives image data and the time point at which the LCDC module actually acquires the image data; and,
the LN being greater than (the VSA_LCDC + the VBP_LCDC) is used to restrict the time point at which the LCDC module starts the DPI interface, which is the data transmission interface between the LCDC module and the MIPI TX module, to after the T0_3.
In this example, because LN is greater than or equal to (VSA_LCDC + VBP_LCDC), that is, the duration for which the receiving end of the LCDC module receives blanking data is enough to cover the duration for which the LCDC module sends the corresponding blanking data, it is ensured that, after T0_3, the time point at which the LCDC module receives the frame start instruction precedes the time point at which the LCDC module sends valid data, and that the interval between them is not too long. The input pixel data and the output pixel data can therefore be kept balanced when the chip is applied, and the screen end can display normally during long-time transmission.
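For illustration, the check below expresses this constraint in line periods, under the simplifying assumptions that the input line time and the LCDC output line time match and that the datapath delay is also given in line periods; the names are ours and this is not the patent's own formulation.

```python
def ln_constraint_ok(vsa_in: int, vbp_in: int, datapath_delay_lines: float,
                     vsa_lcdc: int, vbp_lcdc: int) -> bool:
    """LN = V_blank + datapath delay must cover the LCDC blanking VSA_LCDC + VBP_LCDC."""
    ln_lines = (vsa_in + vbp_in) + datapath_delay_lines  # LN expressed in line periods
    return ln_lines >= vsa_lcdc + vbp_lcdc
```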
Referring to fig. 3, fig. 3 is a timing chart of an image data transmission control method in video mode according to an embodiment of the present application. As shown in fig. 3, the upper timing chart indicates the control timing of the chip receiving end (MIPI RX module + VPRE module + VIDC module), and the lower timing chart indicates the control timing of the chip LCDC module, where:
"MIPI RX receive AP Vsync" means: MIPI RX receives the Vsync signal from AP;
"VIDC frame startto LDCD" means: the VIDC module sends a frame start signal instructing the LCDC module to start the DPI interface (interface between the LCDC module and MIPI TX) and begin transmitting image data;
"LCDC Start DryRun" means: the LCDC module starts to run;
"LCDC Send validdata" means: the LCDC module sends effective data to the MIPI TX module;
"LCDC StartVsync to MIPI TX" means: the LCDC module sends a Vsync signal to the MIPI TX module;
in the control time sequence expressed by the legend, in a first period corresponding to a first frame, the VPRE module counts the interval duration LF between a time point t0_1 of receiving a first receiving end frame synchronization signal vsync_in sent by an AP and a time point t0_2 of receiving a second vsync_in through an IPI interface, and determines a receiving end single frame valid data processing duration tvact_in according to the LF; and updating the DSI Lane rate and the DPI OSC according to the Tvact_in; wherein the first period refers to a period between t0_1 and t0_2;
in a second period corresponding to the second frame image data, the VPRE module determines an interval duration LN between t0_2 and t1_2, where t1_2 is a time point when the LCDC module receives the frame start signal sent by the VIDC module in the current period; determining a vertical synchronous pixel row number VSA_LCDC and a vertical back shoulder pixel row number VBP_LCDC in a control time sequence of the LCDC module according to the configuration information of the display module; and determining t2_3 according to the LN, vsa_lcdc, vbp_lcdc, LT, and a time point t0_3 when the MIPI RX module receives the third vsync_in of the AP, where t2_3 is a time point when the LCDC module starts to start the DPI interface and sends an output end frame synchronization vsync_out signal to the MIPI TX module in a third period in which the chips receive and process the third frame image data, and the vsync_out signal is used to instruct the MIPI TX module to start transmitting the image data; and transmitting t2_3 to the LCDC module; wherein the second period refers to a period between T0_2 and T0_3, the third period refers to a period between T0_3 and T0_4, and T0_4 is a point in time when the VPRE module receives Vsync_in sent by the AP for the fourth time
And, in the third period corresponding to the third frame image, the LCDC module receives T2_3 and sends Vsync_out to the MIPI TX module at T2_3; and the MIPI TX module receives Vsync_out and starts to send the third frame image data to the display module.
It should be noted that, in the first period and the second period, after the LCDC module receives the frame start signal of the VIDC module, it does not respond by sending data to the MIPI TX module; it starts to actually send data to the MIPI TX module from the corresponding time node (T1_3 + LT) in the third period.
Here, if LN >= VSA + VBP, the blank lines sent by the AP plus the datapath delay are enough to cover VSA + VBP. The start time point of the LCDC DPI interface then falls after T0_3, at T2_3 = T0_3 + LN - (TVSA_LCD + TVBP_LCD) + LT.
In this possible example, after the LCDC module sends the Vsync_out to the MIPI TX module at the T2_3, the method further includes: the LCDC module starts the DPI interface and switches the working mode of the DPI interface from the current low-power-consumption LP mode to a high-speed HS mode.
In one possible example, the DPI OSC is 300MHz.
It can be seen that, in this example, since the LF/LN measurements for the first frame and the second frame are based on counts of the configured LCDC clock cycles, initializing the DPI clock to a fixed 300MHz, i.e. the highest clock, gives better counting accuracy and makes it easier to adapt the control timing dynamically afterwards.
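As a small illustration of this counting basis, converting a cycle count into a duration is a single division against the fixed clock. The 300 MHz value comes from the example above; the names are ours, and the assumption that LF/LN are counted against this clock follows the text.

```python
DPI_OSC_HZ = 300_000_000  # fixed DPI clock used as the counting reference


def cycles_to_seconds(cycle_count: int, clock_hz: int = DPI_OSC_HZ) -> float:
    """Convert an LF/LN measurement expressed in clock cycles into seconds."""
    return cycle_count / clock_hz
```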
In one possible example, the duration of the first period, the second period, and the third period are the same.
In one possible example, the duration of LN1 in the first period, LN2 in the second period, and LN3 in the third period are the same, where LN2 is the LN, LN1 is the interval duration between t0_1 and t1_1, t1_1 is the time point when the LCDC module receives the frame start signal sent by the VIDC module in the first period, LN3 is the interval duration between t0_3 and t1_3, and t1_3 is the time point when the LCDC module receives the frame start signal sent by the VIDC module in the third period.
The method also makes it easier for software developers in charge of upgrades and maintenance, technical support engineers and after-sales service engineers to understand the software flow during customer projects and service, so that problems encountered during project debugging can be handled more quickly and effectively.
The foregoing description of the embodiments of the present application has been presented primarily in terms of a method-side implementation. It will be appreciated that the bridge chip, in order to implement the above-described functions, comprises corresponding hardware structures and/or software modules that perform the respective functions. Those of skill in the art will readily appreciate that the elements and algorithm steps described in connection with the embodiments disclosed herein may be embodied as hardware or a combination of hardware and computer software. Whether a function is implemented as hardware or computer software driven hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The embodiment of the present application may divide the functional units of the chip according to the above method example, for example, each functional unit may be divided corresponding to each function, or two or more functions may be integrated in one processing unit. The integrated units may be implemented in hardware or in software functional units. It should be noted that, in the embodiment of the present application, the division of the units is schematic, which is merely a logic function division, and other division manners may be implemented in actual practice.
The embodiment of the application provides a chip, which is applied to an image data processing chipset, wherein the chip comprises a mobile industry processor interface receiving MIPI RX module, a video preprocessing VPRE module, an image processing VIDC module, an image display processing LCDC module and a MIPI TX module, the MIPI RX module is connected with the VPRE module, the VPRE module is connected with the VIDC module, the VIDC module is connected with the LCDC module, and the LCDC module is connected with the MIPI TX module.
The embodiment of the application may divide the functional modules of the chip according to the above method example, for example, each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module. The integrated modules may be implemented in hardware or in software functional modules. The division of the modules in the embodiment of the present application is schematic, which is merely a logic function division, and other division manners may be implemented in practice.
Fig. 1 shows a possible schematic structure of chips involved in the above-described embodiment in the case of dividing respective functional blocks with corresponding respective functions. As shown in fig. 1, the chip 100 includes a mobile industry processor interface receiving MIPI RX module 101, a Video pre-processing VPRE module 102, an image processing VIDC module 103, an image display processing module LCDC module 104, and a MIPI TX module 105, the MIPI RX module 101 is connected to the VPRE module 102, the VPRE module 102 is connected to the VIDC module 103, the VIDC module 103 is connected to the LCDC module 104, the LCDC module 104 is connected to the MIPI TX module 105, and the Video mode means that the output mode of the chip is a Video mode, wherein,
the LCDC module 104 is configured to initialize default parameters of the configuration chip, where the default parameters include a display interface DSI channel lane rate, a display pixel interface DPI clock frequency OSC, and a buffer line number threshold LT, where the LT is configured to maintain the following states: after the LCDC module 104 sends the output end frame synchronization vsync_out signal to the MIPI TX module 105, valid data in an internal buffer of the LCDC module 104 may be transmitted to the display module by the MIPI TX module 105, where the valid data in the internal buffer is used to compensate measurement and calculation errors of the LCDC module 104;
In a first period of time during which the chip 100 receives and processes the first frame image data, the VPRE module 102 is configured to count, through an IPI interface, an interval duration LF between a time point t0_1 at which the first receiving end frame synchronization signal vsync_in sent by the AP is received and a time point t0_2 at which the second vsync_in is received, and determine a receiving end single frame valid data processing duration tvact_in according to the LF; and updating the DSI lane rate, the DPI OSC according to the tvact_in; wherein the first period refers to a period between the t0_1 and the t0_2;
during a second period in which the chip 100 receives and processes second frame image data, the VPRE module 102 is further configured to determine an interval length LN between the t0_2 and t1_2, where t1_2 is a time point when the LCDC module 104 receives a frame start signal sent by the VIDC module 103 during a current period, where the frame start signal is used to instruct the LCDC module to start and start transmitting image data; determining a vertical synchronous pixel line number VSA_LCDC and a vertical back shoulder pixel line number VBP_LCDC in a control time sequence of the LCDC module 104 according to the configuration information of the display module; and determining t2_3 from the LN, the vsa_lcdc, the vbp_lcdc, the LT, and a time point t0_3 when the MIPI RX module 101 receives a third vsync_in of the AP, wherein t2_3 is a time point when the LCDC module 104 starts to start a DPI interface and sends an output end frame synchronization vsync_out signal to the MIPI TX module 105 during a third period when the chip receives and processes third frame image data, the vsync_out signal being used for instructing the MIPI TX module 105 to start transmitting image data; and sending the t2_3 to the LCDC module 104; wherein the second period refers to a period between t0_2 and t0_3, the third period refers to a period between t0_3 and t0_4, and t0_4 is a time point when the VPRE module 102 receives vsync_in sent by the AP for the fourth time
During the third period, the LCDC module 104 is further configured to receive the t2_3 and send the vsync_out to the MIPI TX module 105 at the t2_3; and, the MIPI TX module 105 receives the vsync_out and starts to transmit the third frame image data to the display module.
In one possible example, the determining t2_3 according to the LN, the vsa_lcdc, the vbp_lcdc, the LT, and the time point t0_3 when the MIPI RX module receives the third vsync_in of the AP, the VPRE module 102 is specifically configured to: determining t2_3 'from the LN, the vsa_lcdc, the vbp_lcdc, and a time point t0_3 when the MIPI RX module 101 receives the third vsync_in of the AP, the t2_3' being a time point when the LCDC module starts to start up a DPI interface without considering the LT and transmits the vsync_out signal to the MIPI TX module 105; determining said t2_3=said t2_3'+ said LT from said LT and said t2_3'.
In one possible example, the determining T2_3' according to the LN, the VSA_LCDC, the VBP_LCDC, and the time point T0_3 when the MIPI RX module receives the third Vsync_in of the AP includes: determining that the time point at which the LCDC module actually receives valid data after the T0_3 is T0_3 + LN; determining that the time point at which the LCDC module actually starts transmitting valid data after the T0_3 is T0_3 + Δt + TVSA_LCD + TVBP_LCD, where TVSA_LCD and TVBP_LCD are vertical blanking parameters of the LCDC module, that is, the LCDC module starts actually outputting valid image data only after (TVSA_LCD + TVBP_LCD) has elapsed from its start, the TVSA_LCD is determined according to the VSA_LCDC and the time for the LCDC module to transmit one line of image data, the TVBP_LCD is determined according to the VBP_LCDC, Δt is the time difference between the T0_3 and the T2_3', i.e. Δt = T2_3' - T0_3, and T2_3' is the time point at which the LCDC module starts the DPI interface and sends the Vsync_out signal to the MIPI TX module 105 without considering the LT; the mathematical expression of the requirement that the timing at which the LCDC module receives image data and the timing at which it transmits image data be kept consistent is the following target formula:
T0_3 + LN = T0_3 + Δt + TVSA_LCD + TVBP_LCD;
determining the T2_3' = T0_3 + LN - (TVSA_LCD + TVBP_LCD) according to the target formula and the calculation formula of Δt.
In one possible example, the LN is greater than or equal to (the vsa_lcdc+the vbp_lcdc).
In one possible example, the LN is equal to the sum of V_blank and the datapath delay, where V_blank represents the receiving duration of the lines of pixel data, among the image data sent by the AP, that will be blanked in the horizontal direction, the blanking referring to hidden display of the display module; the V_blank is equal to the sum of the length VSA_in of the frame synchronization signal at the receiving end and the back shoulder VBP_in of the frame synchronization signal at the receiving end, and the datapath delay represents the interval duration between the time point at which the MIPI RX module receives image data and the time point at which the LCDC module actually acquires the image data; and, the LN being greater than (the VSA_LCDC + the VBP_LCDC) is used to restrict the time point at which the LCDC module starts the DPI interface, which is the data transmission interface between the LCDC module and the MIPI TX module, to after the T0_3.
In one possible example, after the LCDC module sends the Vsync_out to the MIPI TX module at the T2_3, the LCDC module 104 is further specifically configured to: start the DPI interface and switch the operation mode of the DPI interface from the current low-power LP mode to a high-speed HS mode.
In one possible example, the DPI OSC is 300MHz.
In one possible example, the duration of the first period, the second period, and the third period are the same.
In one possible example, the duration of LN1 in the first period, LN2 in the second period, and LN3 in the third period are the same, where LN2 is the LN, LN1 is the interval duration between t0_1 and t1_1, t1_1 is the time point when the LCDC module receives the frame start signal sent by the VIDC module in the first period, LN3 is the interval duration between t0_3 and t1_3, and t1_3 is the time point when the LCDC module receives the frame start signal sent by the VIDC module in the third period.
It should be understood that, in various embodiments of the present application, the sequence numbers of the foregoing processes do not mean the order of execution, and the order of execution of the processes should be determined by the functions and internal logic thereof, and should not constitute any limitation on the implementation process of the embodiments of the present application.
In the several embodiments provided in the present application, it should be understood that the disclosed method, apparatus, and system may be implemented in other manners. For example, the device embodiments described above are merely illustrative; for example, the division of the units is only one logic function division, and other division modes can be adopted in actual implementation; for example, multiple units or components may be combined or may be integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may be physically included separately, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in hardware plus software functional units.
Although the present invention is disclosed above, the present invention is not limited thereto. Variations and modifications, including combinations of the different functions and implementation steps as well as software and hardware embodiments, may be made by those skilled in the art without departing from the spirit and scope of the invention.

Claims (10)

1. An image data transmission control method in a Video mode, characterized in that the method is applied to a chip in an image data processing chipset, the image data processing chipset comprises an application processor AP, the chip and a display module, the AP is in communication connection with the chip, the chip is in communication connection with the display module, the chip comprises a mobile industry processor interface receiving MIPI RX module, a Video preprocessing VPRE module, an image processing VIDC module, an image display processing LCDC module and a MIPI TX module, the MIPI RX module is connected with the VPRE module, the VPRE module is connected with the VIDC module, the VIDC module is connected with the LCDC module, the LCDC module is connected with the MIPI TX module, and the Video mode means that the output mode of the chip is the Video mode; the method comprises the following steps:
The LCDC module initializes and configures default parameters of the chip, the default parameters including a display interface DSI channel lane rate, a display pixel interface DPI clock frequency OSC, and a buffer line number threshold LT, the LT being configured to maintain the following state: after the LCDC module sends an output end frame synchronization Vsync_out signal to the MIPI TX module, valid data in an internal buffer of the LCDC module can be transmitted to the display module by the MIPI TX module, and the valid data in the internal buffer is used to compensate for measurement and calculation errors of the LCDC module;
in a first period of time during which the chips receive and process the first frame of image data, the VPRE module counts, through an IPI interface, an interval duration LF between a time point t0_1 at which a first receiving end frame synchronization signal vsync_in sent by the AP is received and a time point t0_2 at which a second vsync_in is received, and determines a receiving end single frame valid data processing duration tvact_in according to the LF; and updating the DSI lane rate, the DPI OSC according to the tvact_in; wherein the first period refers to a period between the t0_1 and the t0_2;
in a second period of time during which the chips receive and process second frame image data, the VPRE module determines an interval duration LN between the t0_2 and the t1_2, where t1_2 is a time point when the LCDC module receives a frame start signal sent by the VIDC module in a current period of time, where the frame start signal is used to instruct the LCDC module to start and start transmitting image data; determining a vertical synchronous pixel line number VSA_LCDC and a vertical back shoulder pixel line number VBP_LCDC in a control time sequence of the LCDC module according to the configuration information of the display module; and determining t2_3 according to the LN, the vsa_lcdc, the vbp_lcdc, the LT, and a time point t0_3 when the MIPI RX module receives a third vsync_in of the AP, wherein the t2_3 is a time point when the LCDC module starts to start a DPI interface and sends an output end frame synchronization vsync_out signal to the MIPI TX module within a third period when the chip receives and processes third frame image data, and the vsync_out signal is used for indicating the MIPI TX module to start transmitting image data; and sending the t2_3 to the LCDC module; wherein the second period refers to a period between the t0_2 and the t0_3, the third period refers to a period between the t0_3 and the t0_4, and the t0_4 is a time point when the VPRE module receives the vsync_in sent by the AP for the fourth time;
During the third period, the LCDC module receives the t2_3 and sends the vsync_out to the MIPI TX module at the t2_3; and the MIPI TX module receives the Vsync_out and starts to send the third frame image data to the display module.
2. The method of claim 1, wherein the determining t2_3 from the LN, the vsa_lcdc, the vbp_lcdc, the LT, and a time point t0_3 when the MIPI RX module receives the third vsync_in of the AP comprises:
determining t2_3 'from the LN, the vsa_lcdc, the vbp_lcdc, and a time point t0_3 when the MIPI RX module receives the third vsync_in of the AP, the t2_3' being a time point when the LCDC module starts to start up a DPI interface and transmits the vsync_out signal to the MIPI TX module without considering the LT;
determining, according to the LT and the T2_3', that the T2_3 = the T2_3' + the LT.
3. The method of claim 2, wherein the determining T2_3' from the LN, the VSA_LCDC, the VBP_LCDC, and the time point T0_3 when the MIPI RX module receives the third Vsync_in of the AP comprises:
determining that the time point at which the LCDC module actually receives valid data after the T0_3 is T0_3 + LN;
determining that the time point at which the LCDC module actually starts transmitting valid data after the T0_3 is T0_3 + Δt + TVSA_LCD + TVBP_LCD, wherein TVSA_LCD and TVBP_LCD are vertical blanking parameters of the LCDC module, that is, the LCDC module starts actually outputting valid image data only after (TVSA_LCD + TVBP_LCD) has elapsed from its start; the TVSA_LCD is determined according to the VSA_LCDC and the time for the LCDC module to transmit one line of image data, the TVBP_LCD is determined according to the VBP_LCDC, Δt is the time difference between the T0_3 and the T2_3', i.e. Δt = T2_3' - T0_3, and T2_3' is the time point at which the LCDC module starts the DPI interface and sends the Vsync_out signal to the MIPI TX module without considering the LT;
the mathematical expression of the requirement that the timing at which the LCDC module receives image data and the timing at which it transmits image data be kept consistent is the following target formula:
T0_3 + LN = T0_3 + Δt + TVSA_LCD + TVBP_LCD;
determining the T2_3' = T0_3 + LN - (TVSA_LCD + TVBP_LCD) according to the target formula and the calculation formula of Δt.
4. A method according to any one of claims 1-3, wherein said LN is greater than or equal to (said vsa_lcdc+said vbp_lcdc).
5. A method according to any one of claims 1 to 3, wherein LN is equal to a sum of v_blank and datapath delay, the v_blank is used for representing a receiving period of a line of pixel data in which a plurality of lines of pixel data in the image data sent by the AP will be blanked in a horizontal direction, the blanking refers to hidden display of the display module, the v_blank is equal to a sum of a length vsa_in of a frame synchronization signal at a receiving end and a back shoulder vbp_in of a frame synchronization signal at the receiving end, and the datapath delay is used for representing an interval period between a time point when the MIPI RX module receives the image data and a time point when the LCDC module actually acquires the image data; and, in addition, the method comprises the steps of,
the LN being greater than (the VSA_LCDC + the VBP_LCDC) is used to restrict the time point at which the LCDC module starts a DPI interface, which is a data transmission interface between the LCDC module and the MIPI TX module, to after the T0_3.
6. The method of claim 5, wherein after the LCDC module sends the Vsync_out to the MIPI TX module at the T2_3, the method further comprises:
the LCDC module starts the DPI interface and switches the working mode of the DPI interface from a current low-power-consumption LP mode to a high-speed HS mode.
7. A method according to any of claims 1-3, wherein the DPI OSC is 300MHz.
8. A method according to any one of claims 1-3, wherein the durations of the first period, the second period, and the third period are the same.
9. The method of any one of claims 1-3, wherein the durations of LN1 in the first period, LN2 in the second period, and LN3 in the third period are the same, where LN2 is the LN, LN1 is the interval duration between T0_1 and T1_1, T1_1 is the time point at which the LCDC module receives the frame start signal sent by the VIDC module within the first period, LN3 is the interval duration between T0_3 and T1_3, and T1_3 is the time point at which the LCDC module receives the frame start signal sent by the VIDC module within the third period.
10. A chip, wherein the chip is applied to an image data processing chip set, the image data processing chip set comprises an application processor AP, the chip, and a display module, the AP is communicatively connected to the chip, and the chip is communicatively connected to the display module;
the chip comprises a mobile industry processor interface MIPI RX module, a video preprocessing VPRE module, an image processing VIDC module, an image display processing LCDC module, and a MIPI TX module, wherein the MIPI RX module is connected to the VPRE module, the VPRE module is connected to the VIDC module, the VIDC module is connected to the LCDC module, the LCDC module is connected to the MIPI TX module, and the output mode of the chip is a Video mode; wherein,
the LCDC module is configured to initialize and configure default parameters of the chip, where the default parameters include a display serial interface DSI lane rate, a display pixel interface DPI clock frequency OSC, and a buffer line number threshold LT, the LT being configured to maintain the following state: after the LCDC module sends an output-end frame synchronization Vsync_out signal to the MIPI TX module, valid data in the internal buffer of the LCDC module can be transmitted to the display module by the MIPI TX module, the valid data in the internal buffer being used to compensate for measurement and calculation errors of the LCDC module;
in a first period during which the chip receives and processes the first frame of image data, the VPRE module is configured to count, through an IPI interface, the interval duration LF between the time point T0_1 at which the first receiving-end frame synchronization signal Vsync_in sent by the AP is received and the time point T0_2 at which the second Vsync_in is received, and to determine the receiving-end single-frame valid data processing duration Tvact_in according to the LF; and to update the DSI lane rate and the DPI OSC according to the Tvact_in; wherein the first period refers to the period between the T0_1 and the T0_2;
in a second period during which the chip receives and processes the second frame of image data, the VPRE module is further configured to determine the interval duration LN between the T0_2 and T1_2, where T1_2 is the time point at which the LCDC module receives a frame start signal sent by the VIDC module within the current period, the frame start signal being used to instruct the LCDC module to start up and begin transmitting image data; to determine the vertical synchronization pixel line number VSA_LCDC and the vertical back porch pixel line number VBP_LCDC in the control timing of the LCDC module according to the configuration information of the display module; to determine T2_3 according to the LN, the VSA_LCDC, the VBP_LCDC, the LT, and the time point T0_3 at which the MIPI RX module receives the third Vsync_in sent by the AP, where T2_3 is the time point at which the LCDC module starts the DPI interface and sends an output-end frame synchronization Vsync_out signal to the MIPI TX module within a third period during which the chip receives and processes the third frame of image data, the Vsync_out signal being used to instruct the MIPI TX module to start transmitting image data; and to send the T2_3 to the LCDC module; wherein the second period refers to the period between the T0_2 and the T0_3, the third period refers to the period between the T0_3 and T0_4, and T0_4 is the time point at which the VPRE module receives the Vsync_in sent by the AP for the fourth time;
the LCDC module is further configured to receive the T2_3 and to send the Vsync_out to the MIPI TX module at the T2_3 within the third period; and the MIPI TX module is configured to receive the Vsync_out and to start sending the third frame of image data to the display module.
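To tie the apparatus claim together, the sketch below is a non-normative walk-through of the three periods described in claim 10, again assuming one shared line-period unit for all timestamps and durations; every function and parameter name is hypothetical.

```python
# Non-normative walk-through of the three periods of claim 10; one shared
# line-period unit is assumed for all values, and all names are hypothetical.

def schedule_third_frame(t0_1, t0_2, t0_3, t1_2, vsa_lcdc, vbp_lcdc, lt):
    # First period (T0_1 .. T0_2): LF is measured; Tvact_in, the DSI lane rate
    # and the DPI OSC are derived from it (that retuning is outside this sketch).
    lf = t0_2 - t0_1

    # Second period (T0_2 .. T0_3): LN is measured and T2_3 is computed as in
    # claims 1-3 so that input and output pixel data stay balanced.
    ln = t1_2 - t0_2
    t2_3 = t0_3 + ln - (vsa_lcdc + vbp_lcdc) + lt

    # Third period (T0_3 .. T0_4): the LCDC module sends Vsync_out at T2_3 and the
    # MIPI TX module then starts transmitting the third frame to the display module.
    return lf, t2_3
```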
CN202310122699.8A 2023-02-16 2023-02-16 Image data transmission control method in video mode Active CN115834793B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310122699.8A CN115834793B (en) 2023-02-16 2023-02-16 Image data transmission control method in video mode

Publications (2)

Publication Number Publication Date
CN115834793A CN115834793A (en) 2023-03-21
CN115834793B true CN115834793B (en) 2023-04-25

Family

ID=85521594

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310122699.8A Active CN115834793B (en) 2023-02-16 2023-02-16 Image data transmission control method in video mode

Country Status (1)

Country Link
CN (1) CN115834793B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116055779B (en) * 2023-03-29 2024-03-19 深圳曦华科技有限公司 Video mode chip data stream transmission time sequence control method and device
CN116030748B (en) * 2023-03-30 2023-08-08 深圳曦华科技有限公司 Method and device for dynamically adjusting chip clock frequency
CN116052578B (en) * 2023-03-31 2023-08-04 深圳曦华科技有限公司 Method and device for synchronously controlling chip input and output in display chip system

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101670446B1 (en) * 2016-07-26 2016-10-28 (주)큐브이미징시스템즈 Camera image real time processing apparatus and method thereof
CN115550709B (en) * 2022-01-07 2023-09-26 荣耀终端有限公司 Data processing method and electronic equipment
CN114090500B (en) * 2022-01-13 2022-04-12 南京初芯集成电路有限公司 All-pass image processing SOC chip and image processing method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant