CN116506562B - Video display method and system based on multiple channels - Google Patents

Video display method and system based on multiple channels

Info

Publication number
CN116506562B
Authority
CN
China
Prior art keywords
brightness
pixel
video
matrix
video display
Prior art date
Legal status
Active
Application number
CN202310764216.4A
Other languages
Chinese (zh)
Other versions
CN116506562A (en)
Inventor
王志欣
王勇
黎启
田福鹤
Current Assignee
Shenzhen Menyaoshi Technology Co ltd
Original Assignee
Shenzhen Menyaoshi Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Menyaoshi Technology Co ltd
Priority to CN202310764216.4A
Publication of CN116506562A
Application granted
Publication of CN116506562B
Active legal status: Current
Anticipated expiration


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2624Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects for obtaining an image which is composed of whole input images, e.g. splitscreen
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G3/00Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
    • G09G3/20Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix no fixed position being assigned to or needed to be assigned to the individual characters or partial characters
    • G09G3/22Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix no fixed position being assigned to or needed to be assigned to the individual characters or partial characters using controlled light sources
    • G09G3/30Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix no fixed position being assigned to or needed to be assigned to the individual characters or partial characters using controlled light sources using electroluminescent panels
    • G09G3/32Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix no fixed position being assigned to or needed to be assigned to the individual characters or partial characters using controlled light sources using electroluminescent panels semiconductive, e.g. using light-emitting diodes [LED]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/77Circuits for processing the brightness signal and the chrominance signal relative to each other, e.g. adjusting the phase of the brightness signal relative to the colour signal, correcting differential gain or differential phase

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Controls And Circuits For Display Device (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The application provides a video display method and system based on multiple channels. A video source object is constructed for reading the video monitoring data of each video source, together with a frame vector containing a plurality of Mat objects for reading the data frames of the video monitoring data and a brightness matrix for reading the brightness values of the Mat objects. The data frames of the video monitoring data are read from the video sources and written into the Mat objects; the Mat objects are traversed cyclically to extract the brightness value of each pixel and write it into the brightness matrix; the brightness matrix is used to count the number of pixels in each Mat object that fall into each brightness distribution interval, and the pixel data in the Mat object are processed accordingly; the display picture of the corresponding video display area in the multi-channel video display window is then updated. In this way the brightness of the monitoring pictures of multiple video sources can be controlled reasonably, so that the visibility of every monitoring picture is ensured and the adverse effect of monitoring pictures of uneven brightness on the eyesight of monitoring staff is reduced.

Description

Video display method and system based on multiple channels
Technical Field
The application relates to the technical field of video monitoring, in particular to a video display method and system based on multiple channels.
Background
Multi-channel video surveillance refers to the technique of monitoring multiple video sources simultaneously; the sources may come from different cameras or other video devices such as webcams and digital video recorders. Multi-channel video monitoring is widely used for security monitoring in public places, commercial buildings and residential areas, for monitoring traffic facilities such as roads, bridges and tunnels, and for monitoring industrial sites such as factories and warehouses. To make it easier for monitoring personnel to grasp the situation of a monitored area, multi-channel video monitoring usually improves monitoring efficiency by combining several video sources on one screen. With the development of liquid crystal display technology and LED display technology and the progress of display screen manufacturing, the price of all kinds of electronic screens keeps falling, and standalone or spliced large-size screens can be seen everywhere; in the field of video monitoring it is very common to use standalone or spliced display screens of more than 100 inches to display the video monitoring data of multiple video sources at the same time. However, although displaying the video monitoring data of several video sources on one screen, or on a large electronic screen spliced from several screens, brings great convenience to monitoring personnel, the brightness of different video sources varies because some monitored places are indoor and others outdoor, and because monitoring equipment and objects in the monitored environment move. Even if the screen area is large enough and gaps are left between the video pictures of the different video sources, this brightness variation still has adverse effects on the monitoring personnel: monitoring pictures with low brightness have poor visibility and are easily overlooked, and watching monitoring pictures of uneven brightness for a long time impairs the eyesight of the monitoring personnel.
Disclosure of Invention
To address these problems, the present application provides a video display method and system based on multiple channels that can reasonably control the brightness of the monitoring pictures of multiple video sources, so as to ensure the visibility of each monitoring picture and to reduce the adverse effect of monitoring pictures of uneven brightness on the eyesight of monitoring staff.
In view of this, a first aspect of the present application proposes a video display method based on multiple channels, comprising:
constructing a video source object VS[i] for reading the video monitoring data of each video source, where i ∈ {1, 2, ..., n} and n is the number of video sources;
initializing the video source objects by using addresses of video sources so that each video source object is associated with the video source to acquire corresponding video monitoring data;
constructing a frame vector F for reading the data frames of the video monitoring data, the frame vector F comprising n Mat objects Mat[i] for storing the data frames of the corresponding video sources;
constructing an n×P brightness matrix L for reading and writing the brightness values of the Mat[i], where P is the total number of pixels of each data frame;
creating a multi-channel video display window, the multi-channel video display window comprising n video display areas A[k], k ∈ {1, 2, ..., n};
Reading a layout configuration file of the multichannel video, wherein the layout configuration file comprises a layout code of each video source;
establishing a corresponding relation between the layout codes of each video source and the video display area;
configuring a preset number m of brightness distribution intervals D[j], where j ∈ {1, 2, ..., m} and m is the preset interval number;
before an end instruction is received, executing the following steps cyclically:
traversing each video source object VS[i] and reading a data frame of the video monitoring data from the corresponding video source;
converting the data frame into the YUV color coding format and writing it into Mat[i] of the frame vector F;
cyclically traversing the Mat[i]:
extracting the Y value in the YUV color coding of each pixel from Mat[i] and writing it into the i-th row of the brightness matrix L;
using the brightness matrix L to count the number N[j] of pixels in Mat[i] that fall into each brightness distribution interval D[j];
processing the pixel data in Mat[i] according to the pixel distribution numbers N[j];
updating, according to Mat[i], the display picture of the corresponding video display area A[k] in the multi-channel video display window.
Preferably, the step of processing the pixel data in Mat[i] according to the pixel distribution numbers N[j] specifically comprises:
determining, according to the pixel distribution numbers N[j], a minimum pixel difference brightness line l_min among the pixel data in Mat[i], such that the difference between the number of pixels in Mat[i] whose brightness is greater than the minimum pixel difference brightness line l_min and the number of pixels whose brightness is less than l_min is minimal;
obtaining a brightness reference line l_ref;
modifying the Y value in the YUV color coding of each pixel of Mat[i] according to the minimum pixel difference brightness line l_min and the brightness reference line l_ref.
Preferably, the step of determining, according to the pixel distribution numbers N[j], the minimum pixel difference brightness line among the pixel data in Mat[i] specifically comprises:
initializing a temporary pixel difference value: d = NULL;
initializing the minimum pixel difference brightness line: l_min = NULL;
cyclically traversing the values of j from 1 to m to perform the following steps:
calculating a temporary pixel difference value: d[j] = (N[j+1] + ... + N[m]) − (N[1] + ... + N[j]);
when d[j] > 0, letting d = d[j] and continuing the loop;
when d[j] = 0, letting the minimum pixel difference brightness line l_min equal the upper bound of the brightness distribution interval D[j], and ending the traversal of the j values;
when d[j] < 0, calculating a first temporary pixel difference value: d_a = |d|,
calculating a second temporary pixel difference value: d_b = |d[j]|;
when d_a ≤ d_b, letting the minimum pixel difference brightness line l_min equal the lower bound of the brightness distribution interval D[j], and ending the traversal of the j values;
when d_a > d_b, letting the minimum pixel difference brightness line l_min equal the upper bound of the brightness distribution interval D[j], and ending the traversal of the j values.
Preferably, the step of modifying the Y value in the YUV color coding of each pixel of Mat[i] according to the minimum pixel difference brightness line l_min and the brightness reference line l_ref specifically comprises:
calculating the brightness difference between the minimum pixel difference brightness line and the brightness reference line: Δl = l_min − l_ref;
subtracting Δl from each brightness value in the i-th row of the brightness matrix L to obtain a new brightness matrix L';
modifying the Y value in the YUV color coding of each pixel of Mat[i] to the corresponding brightness value in the i-th row of the brightness matrix L'.
Preferably, the step of modifying the Y value in the YUV color coding of each pixel of Mat[i] according to the minimum pixel difference brightness line l_min and the brightness reference line l_ref specifically comprises:
calculating the brightness difference between the minimum pixel difference brightness line and the brightness reference line: Δl = l_min − l_ref;
traversing each brightness value L[i][q] in the i-th row of the brightness matrix L and calculating, for each of the two cases distinguished by the value of L[i][q], a matrix brightness difference value Δl[q] from the brightness difference Δl and the brightness value L[i][q];
subtracting Δl[q] from the q-th brightness value in the i-th row of the brightness matrix L to obtain a new brightness matrix L';
modifying the Y value in the YUV color coding of each pixel of Mat[i] to the corresponding brightness value in the i-th row of the brightness matrix L'.
Preferably, the step of obtaining the brightness reference line l_ref specifically comprises:
reading ambient brightness data from an ambient brightness sensor and converting the ambient brightness data into an ambient brightness value b;
obtaining an upper bound b_up and a lower bound b_low of the ambient brightness value range;
calculating the brightness reference line l_ref from the ambient brightness value b, the upper bound b_up and the lower bound b_low.
Preferably, after the step of cyclically traversing the Mat[i], the method further comprises:
acquiring, from an image pickup device, an environment image in front of a display device that displays the multi-channel video display window;
determining, through face recognition, whether a monitoring person who is watching the display device exists in the environment image;
when no monitoring person watching the display device exists in the environment image, skipping the steps of extracting the Y value in the YUV color coding of each pixel from Mat[i] and writing it into the brightness matrix L, using the brightness matrix L to count the number N[j] of pixels in Mat[i] falling into each brightness distribution interval D[j], and processing the pixel data in Mat[i] according to the pixel distribution numbers N[j];
and directly executing the step of updating, according to Mat[i], the display picture of the corresponding video display area A[k] in the multi-channel video display window.
Preferably, after the step of determining, through face recognition, whether a monitoring person who is watching the display device exists in the environment image, the method further comprises:
determining the number Q of monitoring persons who are watching the display device;
when the number Q of monitoring persons who are watching the display device is greater than 1, executing the steps of extracting the Y value in the YUV color coding of each pixel from Mat[i] and writing it into the brightness matrix L, using the brightness matrix L to count the number N[j] of pixels in Mat[i] falling into each brightness distribution interval D[j], processing the pixel data in Mat[i] according to the pixel distribution numbers N[j], and updating, according to Mat[i], the display picture of the corresponding video display area A[k] in the multi-channel video display window.
Preferably, after the step of determining, through face recognition, whether a monitoring person who is watching the display device exists in the environment image, the method further comprises:
when a monitoring person who is watching the display device exists in the environment image and the number of monitoring persons who are watching the display device is 1, locating, through the environment image, the position of the line-of-sight focus of the monitoring person on the display device;
and after the step of cyclically traversing the Mat[i], the method further comprises:
calculating the distance of the monitoring person according to the environment image;
acquiring the field-of-view coverage corresponding to the distance, wherein the field-of-view coverage is the display area range in which the preset field-of-view angle range of the monitoring person, centered on the line-of-sight focus, intersects the surface of the display device;
determining the video display area A[k] corresponding to Mat[i];
judging whether the video display area A[k] intersects the field-of-view coverage;
when the video display area A[k] does not intersect the field-of-view coverage, skipping the steps of extracting the Y value in the YUV color coding of each pixel from Mat[i] and writing it into the brightness matrix L, using the brightness matrix L to count the number N[j] of pixels in Mat[i] falling into each brightness distribution interval D[j], and processing the pixel data in Mat[i] according to the pixel distribution numbers N[j];
and directly executing the step of updating, according to Mat[i], the display picture of the corresponding video display area A[k] in the multi-channel video display window.
A second aspect of the present application proposes a multi-channel based video display system comprising a video source for acquiring video monitoring data of a monitoring location, a display device for displaying a multi-channel video display window, an ambient brightness sensor for acquiring ambient brightness data, an image capturing device for acquiring an ambient image directly in front of the display device, and a control device comprising a processor and a memory, the processor executing a computer program stored by the memory to implement the multi-channel based video display method according to any one of the first aspect of the present application.
The application provides a video display method and system based on multiple channels. A video source object is constructed for reading the video monitoring data of each video source, together with a frame vector containing a plurality of Mat objects for reading the data frames of the video monitoring data and a brightness matrix for reading the brightness values of the Mat objects. The data frames of the video monitoring data are read from the video sources and written into the Mat objects; the Mat objects are traversed cyclically to extract the brightness value of each pixel and write it into the brightness matrix; the brightness matrix is used to count the number of pixels in each Mat object that fall into each brightness distribution interval, and the pixel data in the Mat object are processed accordingly; the display picture of the corresponding video display area in the multi-channel video display window is then updated. In this way the brightness of the monitoring pictures of multiple video sources can be controlled reasonably, so that the visibility of every monitoring picture is ensured and the adverse effect of monitoring pictures of uneven brightness on the eyesight of monitoring staff is reduced.
Drawings
FIG. 1 is a flow chart of a video display method based on multiple channels according to an embodiment of the present application;
FIG. 2 is a statistical chart of the number of pixels falling into each brightness distribution interval in a Mat object in a multi-channel-based video display method according to an embodiment of the present application;
FIG. 3 is an image of a data frame of the video monitoring data corresponding to Mat[1] in FIG. 2 in a multi-channel-based video display method according to an embodiment of the present application;
FIG. 4 shows the field-of-view coverage of a monitoring person when only one monitoring person is watching the display device in a multi-channel-based video display method according to an embodiment of the present application;
FIG. 5 is a schematic block diagram of a multi-channel-based video display system according to an embodiment of the present application.
Detailed Description
In order that the above-recited objects, features and advantages of the present application will be more clearly understood, a more particular description of the application will be rendered by reference to the appended drawings and appended detailed description. It should be noted that, without conflict, the embodiments of the present application and features in the embodiments may be combined with each other.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application, however, the present application may be practiced otherwise than as described herein, and therefore the scope of the present application is not limited to the specific embodiments disclosed below.
In the description of the present application, the term "plurality" means two or more, unless explicitly defined otherwise, the orientation or positional relationship indicated by the terms "upper", "lower", etc. are based on the orientation or positional relationship shown in the drawings, merely for convenience of description of the present application and to simplify the description, and do not indicate or imply that the apparatus or elements referred to must have a specific orientation, be constructed and operated in a specific orientation, and thus should not be construed as limiting the present application. The terms "coupled," "mounted," "secured," and the like are to be construed broadly, and may be fixedly coupled, detachably coupled, or integrally connected, for example; can be directly connected or indirectly connected through an intermediate medium. The specific meaning of the above terms in the present application can be understood by those of ordinary skill in the art according to the specific circumstances. Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first", "a second", etc. may explicitly or implicitly include one or more such feature. In the description of the present application, unless otherwise indicated, the meaning of "a plurality" is two or more.
In the description of this specification, the terms "one embodiment," "some implementations," "particular embodiments," and the like, mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
A video display method and system based on multiple channels according to some embodiments of the present application are described below with reference to the accompanying drawings.
As shown in fig. 1, a first aspect of the present application proposes a video display method based on multiple channels, including:
constructing a video source object VS[i] for reading the video monitoring data of each video source, where i ∈ {1, 2, ..., n} and n is the number of video sources; initializing the video source objects with the addresses of the video sources, so that each video source object is associated with its video source and can acquire the corresponding video monitoring data;
constructing a frame vector F for reading the data frames of the video monitoring data, the frame vector F comprising n Mat objects Mat[i] for storing the data frames of the corresponding video sources;
constructing an n×P brightness matrix L for reading and writing the brightness values of the Mat[i], where P is the total number of pixels of each data frame;
creating a multi-channel video display window, the multi-channel video display window comprising n video display areas A[k], k ∈ {1, 2, ..., n};
Reading a layout configuration file of the multichannel video, wherein the layout configuration file comprises a layout code of each video source;
establishing a corresponding relation between the layout codes of each video source and the video display area;
configuring a preset number m of brightness distribution intervals D[j], where j ∈ {1, 2, ..., m} and m is the preset interval number;
before an end instruction is received, executing the following steps cyclically:
traversing each video source object VS[i] and reading a data frame of the video monitoring data from the corresponding video source;
converting the data frame into the YUV color coding format and writing it into Mat[i] of the frame vector F;
cyclically traversing the Mat[i]:
extracting the Y value in the YUV color coding of each pixel from Mat[i] and writing it into the i-th row of the brightness matrix L;
using the brightness matrix L to count the number N[j] of pixels in Mat[i] that fall into each brightness distribution interval D[j];
processing the pixel data in Mat[i] according to the pixel distribution numbers N[j];
updating, according to Mat[i], the display picture of the corresponding video display area A[k] in the multi-channel video display window.
Preferably, the preset interval number is greater than 10 and less than 100. In some embodiments of the present application, the brightness range from 0 to 255 is divided into 20 brightness intervals, and the Y value ranges corresponding to the intervals are respectively:
[0, 12.75], [12.75, 25.5], [25.5, 38.25], [38.25, 51], [51, 63.75], [63.75, 76.5], [76.5, 89.25], [89.25, 102], [102, 114.75], [114.75, 127.5], [127.5, 140.25], [140.25, 153], [153, 165.75], [165.75, 178.5], [178.5, 191.25], [191.25, 204], [204, 216.75], [216.75, 229.5], [229.5, 242.25], [242.25, 255].
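As an illustration of the interval statistics only (not language from the claims), counting the pixel distribution numbers N[j] from the Y plane of one Mat object can be sketched in C++ with OpenCV as follows; the function name countDistribution and the equal-width binning over [0, 255] are assumptions made for this sketch.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Count how many pixels of an 8-bit Y plane fall into each of the m brightness
// distribution intervals D[1..m] covering the range [0, 255].
// Sketch only: the function name and the equal-width binning are assumptions.
std::vector<int> countDistribution(const cv::Mat& yPlane, int m)
{
    std::vector<int> N(m, 0);
    const double width = 255.0 / m;            // 12.75 for m = 20, matching the ranges above
    for (int r = 0; r < yPlane.rows; ++r) {
        const uchar* row = yPlane.ptr<uchar>(r);
        for (int c = 0; c < yPlane.cols; ++c) {
            int j = static_cast<int>(row[c] / width);
            if (j >= m) j = m - 1;             // brightness 255 falls into the last interval
            ++N[j];
        }
    }
    return N;                                  // N[j - 1] holds the count for interval D[j]
}
```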
The flowchart of the multi-channel video display method shown in fig. 1 contains two nested loops. The first layer of nesting is an infinite loop, for example one that uses while(true) as its loop condition; within this loop an external end instruction is monitored so that the loop can be exited, and the steps of reading the video monitoring data from the video sources for processing and display are executed repeatedly. The second layer of nesting is a finite loop whose number of iterations equals the number of video sources: with i as the loop variable, i is initialized to 1, the steps inside the second-layer loop are executed, and i is then incremented by 1; when i > n the second-layer loop ends and execution returns to the remaining steps of the first-layer loop. It should be noted that, in keeping with common programming habits, in some embodiments the loop variable i may instead range from 0 to n−1, i.e. in the second-layer loop i is initialized to 0, the steps are executed, i is incremented by 1, and the loop ends when i ≥ n, returning to the remaining steps of the first-layer loop.
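Purely for orientation, the two nested loops can be sketched in C++ with OpenCV as follows. cv::VideoCapture stands in for the video source objects VS[i], the example URLs are placeholders, and the omitted processing steps are marked with comments; none of the identifiers below are taken from the patent itself.

```cpp
#include <opencv2/opencv.hpp>
#include <string>
#include <vector>

int main()
{
    // Addresses of the n video sources (placeholders for illustration).
    std::vector<std::string> urls = {"rtsp://example/source1", "rtsp://example/source2"};
    const int n = static_cast<int>(urls.size());

    // Video source objects VS[i], each initialized with the address of its source.
    std::vector<cv::VideoCapture> VS(n);
    for (int i = 0; i < n; ++i) VS[i].open(urls[i]);

    // Frame vector F: one Mat object per video source.
    std::vector<cv::Mat> F(n);

    bool endRequested = false;                  // set when an external end instruction arrives
    while (!endRequested) {                     // first-layer (infinite) loop
        for (int i = 0; i < n; ++i) {           // read one data frame from every source
            cv::Mat bgr;
            if (!VS[i].read(bgr)) continue;     // keep the previous frame if reading fails
            cv::cvtColor(bgr, F[i], cv::COLOR_BGR2YUV);   // convert to YUV color coding
        }
        for (int i = 0; i < n; ++i) {           // second-layer loop over the Mat objects
            if (F[i].empty()) continue;
            std::vector<cv::Mat> yuv;
            cv::split(F[i], yuv);               // yuv[0] is the Y (brightness) plane
            // ... count N[j], process the pixel data, cv::merge the planes back,
            //     and update the corresponding video display area A[k] ...
        }
        if (cv::waitKey(1) == 27) endRequested = true;   // Esc key as a stand-in end instruction
    }
    return 0;
}
```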
Fig. 2 shows statistical charts of the number of pixels falling into each brightness distribution interval for three Mat objects in the case of three video sources; it can be seen from them that the brightness distributions of different images can differ greatly. For example, Mat[1] in Fig. 2 is the pixel distribution statistics of a data frame in the video monitoring data of a video source at night, while Mat[2] and Mat[3] are the pixel distribution statistics of data frames in the video monitoring data of an outdoor video source and an indoor video source in the daytime, respectively. For convenience of presentation, the three statistical charts in Fig. 2 have been compressed to a certain extent in the pixel-count direction, so they are not drawn to the same scale in that direction. Fig. 3 is the image of the data frame of the video monitoring data corresponding to Mat[1]; it can be seen that in video monitoring data captured at night the image corresponding to each data frame contains a large number of low-brightness pixels, so that in the pixel distribution statistics a large number of pixels are concentrated in the low-brightness region.
Preferably, the step of processing the pixel data in Mat[i] according to the pixel distribution numbers N[j] specifically comprises:
determining, according to the pixel distribution numbers N[j], a minimum pixel difference brightness line l_min among the pixel data in Mat[i], such that the difference between the number of pixels in Mat[i] whose brightness is greater than the minimum pixel difference brightness line l_min and the number of pixels whose brightness is less than l_min is minimal;
obtaining a brightness reference line l_ref;
modifying the Y value in the YUV color coding of each pixel of Mat[i] according to the minimum pixel difference brightness line l_min and the brightness reference line l_ref.
Preferably, the step of determining, according to the pixel distribution numbers N[j], the minimum pixel difference brightness line among the pixel data in Mat[i] specifically comprises:
initializing a temporary pixel difference value: d = NULL;
initializing the minimum pixel difference brightness line: l_min = NULL;
cyclically traversing the values of j from 1 to m to perform the following steps:
calculating a temporary pixel difference value: d[j] = (N[j+1] + ... + N[m]) − (N[1] + ... + N[j]);
when d[j] > 0, letting d = d[j] and continuing the loop;
when d[j] = 0, letting the minimum pixel difference brightness line l_min equal the upper bound of the brightness distribution interval D[j], and ending the traversal of the j values;
when d[j] < 0, calculating a first temporary pixel difference value: d_a = |d|,
calculating a second temporary pixel difference value: d_b = |d[j]|;
when d_a ≤ d_b, letting the minimum pixel difference brightness line l_min equal the lower bound of the brightness distribution interval D[j], and ending the traversal of the j values;
when d_a > d_b, letting the minimum pixel difference brightness line l_min equal the upper bound of the brightness distribution interval D[j], and ending the traversal of the j values.
Specifically, NULL is a null value or a null pointer; initializing the minimum pixel difference brightness line l_min to NULL means that l_min is allocated a null pointer so that it does not point to any data in memory.
In other embodiments of the present application, algorithms such as a binary search method may be further used to quickly obtain the minimum pixel difference brightness line, which is not described herein.
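A compact sketch of this search over the counts N[j] is given below in C++. Because the exact expressions of the original formulas are not preserved in this text, the cumulative "pixels above minus pixels below" difference used here is an assumed reading of the traversal described above, not a verbatim reproduction of the patent's formulas.

```cpp
#include <cstdlib>
#include <vector>

// Derive the minimum pixel difference brightness line l_min from the interval
// counts N[0..m-1], assuming equal-width intervals over [0, 255].
// Sketch only: the cumulative difference d and the boundary choice below are
// an assumed reading of the garbled formulas in the source text.
double minimumPixelDifferenceLine(const std::vector<int>& N)
{
    const int m = static_cast<int>(N.size());
    const double width = 255.0 / m;              // upper bound of D[j+1] is (j + 1) * width
    long long above = 0, below = 0;
    for (int k = 0; k < m; ++k) above += N[k];

    long long dPrev = above;                      // retained temporary pixel difference value
    for (int j = 0; j < m; ++j) {
        below += N[j];
        above -= N[j];
        const long long d = above - below;        // pixels above minus pixels below the boundary
        if (d > 0) { dPrev = d; continue; }       // still more pixels above: keep traversing
        if (d == 0) return (j + 1) * width;       // exactly balanced at the upper bound
        // d < 0: pick the boundary (lower or upper bound of D[j+1]) with the
        // smaller absolute imbalance, as required by the definition of l_min.
        return (std::llabs(dPrev) <= std::llabs(d)) ? j * width : (j + 1) * width;
    }
    return 255.0;                                 // degenerate case: no pixels counted
}
```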
Preferably, the step of modifying the Y value in the YUV color coding of each pixel of Mat[i] according to the minimum pixel difference brightness line l_min and the brightness reference line l_ref specifically comprises:
calculating the brightness difference between the minimum pixel difference brightness line and the brightness reference line: Δl = l_min − l_ref;
subtracting Δl from each brightness value in the i-th row of the brightness matrix L to obtain a new brightness matrix L';
modifying the Y value in the YUV color coding of each pixel of Mat[i] to the corresponding brightness value in the i-th row of the brightness matrix L'.
Specifically, in the step of subtracting Δl from each brightness value in the i-th row of the brightness matrix L to obtain the new brightness matrix L': when Δl is positive and a brightness value in the i-th row of L is smaller than Δl, the corresponding brightness value in the i-th row of the new brightness matrix L' is set to 0; likewise, when Δl is negative and a brightness value in the i-th row of L minus Δl is greater than 255, the corresponding brightness value in the i-th row of the new brightness matrix L' is set to 255.
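A minimal sketch of this uniform shift is shown below, assuming the Y plane of the Mat object is modified directly instead of going through an explicit row of the matrix L; OpenCV's saturating arithmetic on 8-bit data reproduces the clamping to 0 and 255 described above.

```cpp
#include <opencv2/opencv.hpp>

// Shift the Y (brightness) plane of one Mat object by Δl = l_min - l_ref.
// cv::subtract saturates 8-bit results, so values that would fall below 0
// become 0 and values that would exceed 255 (negative Δl) become 255.
void shiftBrightness(cv::Mat& yPlane, double lMin, double lRef)
{
    const double delta = lMin - lRef;             // brightness difference Δl
    cv::subtract(yPlane, cv::Scalar::all(delta), yPlane);
}
```

After the shift, the modified Y plane would be merged back with the U and V planes (for example with cv::merge) before the corresponding video display area is updated.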
Preferably, the step of modifying the Y value in the YUV color coding of each pixel of Mat[i] according to the minimum pixel difference brightness line l_min and the brightness reference line l_ref specifically comprises:
calculating the brightness difference between the minimum pixel difference brightness line and the brightness reference line: Δl = l_min − l_ref;
traversing each brightness value L[i][q] in the i-th row of the brightness matrix L and calculating, for each of the two cases distinguished by the value of L[i][q], a matrix brightness difference value Δl[q] from the brightness difference Δl and the brightness value L[i][q];
subtracting Δl[q] from the q-th brightness value in the i-th row of the brightness matrix L to obtain a new brightness matrix L';
modifying the Y value in the YUV color coding of each pixel of Mat[i] to the corresponding brightness value in the i-th row of the brightness matrix L'.
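The case conditions and the expressions for the per-value difference Δl[q] are not recoverable from this text. Purely to illustrate why a per-value difference can avoid the clamping needed by the uniform shift, the sketch below uses one plausible, assumed choice (not the patent's formula) that keeps brightness 0 and 255 fixed while moving l_min onto l_ref.

```cpp
#include <algorithm>
#include <opencv2/opencv.hpp>

// Illustrative per-value variant: scale the shift so that 0 and 255 are fixed
// points and l_min maps onto l_ref. The piecewise-linear form is an assumption
// used only to illustrate a per-value matrix brightness difference Δl[q].
void shiftBrightnessPerValue(cv::Mat& yPlane, double lMin, double lRef)
{
    const double delta = lMin - lRef;                          // brightness difference Δl
    for (int r = 0; r < yPlane.rows; ++r) {
        uchar* row = yPlane.ptr<uchar>(r);
        for (int c = 0; c < yPlane.cols; ++c) {
            const double v = row[c];
            const double dq = (v <= lMin)
                ? delta * v / std::max(lMin, 1.0)                     // darker side of l_min
                : delta * (255.0 - v) / std::max(255.0 - lMin, 1.0);  // brighter side of l_min
            row[c] = cv::saturate_cast<uchar>(v - dq);
        }
    }
}
```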
Preferably, the step of obtaining the brightness reference line l_ref specifically comprises:
reading ambient brightness data from an ambient brightness sensor and converting the ambient brightness data into an ambient brightness value b;
obtaining an upper bound b_up and a lower bound b_low of the ambient brightness value range;
calculating the brightness reference line l_ref from the ambient brightness value b, the upper bound b_up and the lower bound b_low.
In other embodiments of the present application, the brightness reference line l_ref is a pre-configured constant. In a multi-channel video display system with an integrated ambient brightness sensor, the brightness data of the ambient brightness sensor is used to compensate the brightness reference line l_ref; in a multi-channel video display system without an integrated brightness sensor, the pre-configured brightness reference line l_ref is used directly.
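The exact mapping from the ambient brightness value to the brightness reference line is not recoverable from this text. Purely as an illustration, a linear mapping of the ambient brightness value b from its range [b_low, b_up] onto the 0 to 255 brightness scale could be written as follows; the linear form itself is an assumption, not the patent's formula.

```cpp
#include <algorithm>

// Illustrative only: map an ambient brightness value b, bounded by [bLow, bUp],
// linearly onto the 0..255 brightness range to obtain a brightness reference line.
// The patent text only states that l_ref is calculated from b, bUp and bLow
// (or taken as a pre-configured constant); the linear form is an assumption.
double brightnessReferenceLine(double b, double bLow, double bUp)
{
    if (bUp <= bLow) return 127.5;               // degenerate range: fall back to mid grey
    b = std::clamp(b, bLow, bUp);                // keep the sensor reading inside its range
    return 255.0 * (b - bLow) / (bUp - bLow);
}
```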
Preferably, after the step of cyclically traversing the Mat[i], the method further comprises:
acquiring, from an image pickup device, an environment image in front of a display device that displays the multi-channel video display window;
determining, through face recognition, whether a monitoring person who is watching the display device exists in the environment image;
when no monitoring person watching the display device exists in the environment image, skipping the steps of extracting the Y value in the YUV color coding of each pixel from Mat[i] and writing it into the brightness matrix L, using the brightness matrix L to count the number N[j] of pixels in Mat[i] falling into each brightness distribution interval D[j], and processing the pixel data in Mat[i] according to the pixel distribution numbers N[j];
and directly executing the step of updating, according to Mat[i], the display picture of the corresponding video display area A[k] in the multi-channel video display window.
In the technical scheme of the embodiment, when no monitoring personnel watch the multi-channel video display window, the step of brightness processing of the video monitoring data is skipped, and the picture of the original video monitoring data is directly displayed so as to save processing resources.
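The patent does not name a particular face recognition technique. As a minimal sketch of the gating idea, OpenCV's Haar cascade detector can stand in for it; the cascade file path and the simple rule that a detected face means a monitoring person is watching are assumptions made for this example.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Return true if at least one face is found in the environment image captured
// in front of the display device. Sketch only: a Haar cascade stands in for the
// unspecified face recognition step.
bool monitoringPersonPresent(const cv::Mat& environmentImage,
                             cv::CascadeClassifier& faceDetector)
{
    cv::Mat gray;
    cv::cvtColor(environmentImage, gray, cv::COLOR_BGR2GRAY);
    cv::equalizeHist(gray, gray);                 // improve detection under uneven lighting
    std::vector<cv::Rect> faces;
    faceDetector.detectMultiScale(gray, faces, 1.1, 3);
    return !faces.empty();
}

// Usage (the cascade file path is an assumption):
//   cv::CascadeClassifier det("haarcascade_frontalface_default.xml");
//   if (!monitoringPersonPresent(envImage, det)) { /* skip the brightness processing */ }
```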
Preferably, after the step of determining, through face recognition, whether a monitoring person who is watching the display device exists in the environment image, the method further comprises:
determining the number Q of monitoring persons who are watching the display device;
when the number Q of monitoring persons who are watching the display device is greater than 1, executing the steps of extracting the Y value in the YUV color coding of each pixel from Mat[i] and writing it into the brightness matrix L, using the brightness matrix L to count the number N[j] of pixels in Mat[i] falling into each brightness distribution interval D[j], processing the pixel data in Mat[i] according to the pixel distribution numbers N[j], and updating, according to Mat[i], the display picture of the corresponding video display area A[k] in the multi-channel video display window.
Preferably, after the step of determining, through face recognition, whether a monitoring person who is watching the display device exists in the environment image, the method further comprises:
when a monitoring person who is watching the display device exists in the environment image and the number of monitoring persons who are watching the display device is 1, locating, through the environment image, the position of the line-of-sight focus of the monitoring person on the display device;
and after the step of cyclically traversing the Mat[i], the method further comprises:
calculating the distance of the monitoring person according to the environment image;
acquiring the field-of-view coverage corresponding to the distance, wherein the field-of-view coverage is the display area range in which the preset field-of-view angle range of the monitoring person, centered on the line-of-sight focus, intersects the surface of the display device;
determining the video display area A[k] corresponding to Mat[i];
judging whether the video display area A[k] intersects the field-of-view coverage;
when the video display area A[k] does not intersect the field-of-view coverage, skipping the steps of extracting the Y value in the YUV color coding of each pixel from Mat[i] and writing it into the brightness matrix L, using the brightness matrix L to count the number N[j] of pixels in Mat[i] falling into each brightness distribution interval D[j], and processing the pixel data in Mat[i] according to the pixel distribution numbers N[j];
and directly executing the step of updating, according to Mat[i], the display picture of the corresponding video display area A[k] in the multi-channel video display window.
Fig. 4 shows the case in which only one monitoring person is watching the display device: the farther the monitoring person is from the display device, the larger the field-of-view coverage corresponding to the field-of-view angle θ, and conversely, the closer the monitoring person is, the smaller the field-of-view coverage. Among the video display areas shown on the display device in the figure, the video display areas drawn with a diagonal-hatched background are those that intersect the field-of-view coverage.
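As a geometric illustration only (the patent fixes neither the exact shape of the coverage nor how screen coordinates are obtained), the field-of-view coverage can be approximated as a circle on the screen centred at the line-of-sight focus, with a radius that grows with the viewing distance, and each video display area A[k] can then be tested for intersection with it.

```cpp
#include <algorithm>
#include <cmath>
#include <opencv2/opencv.hpp>

// Approximate the field-of-view coverage as a circle of radius
// r = distance * tan(theta / 2) around the line-of-sight focus (screen units),
// and test whether display area A[k] intersects it. The circular shape, the
// tan-based radius and the unit conversion are assumptions for illustration.
bool areaIntersectsCoverage(const cv::Rect& area,        // video display area A[k], in pixels
                            const cv::Point2f& focus,    // line-of-sight focus on the screen
                            double distancePx,           // viewer distance, in pixel units
                            double thetaRad)             // preset field-of-view angle
{
    const double radius = distancePx * std::tan(thetaRad / 2.0);
    // Distance from the focus to the closest point of the rectangular area.
    const double cx = std::clamp(static_cast<double>(focus.x),
                                 static_cast<double>(area.x),
                                 static_cast<double>(area.x + area.width));
    const double cy = std::clamp(static_cast<double>(focus.y),
                                 static_cast<double>(area.y),
                                 static_cast<double>(area.y + area.height));
    const double dx = focus.x - cx, dy = focus.y - cy;
    return dx * dx + dy * dy <= radius * radius;
}
```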
As shown in fig. 5, a second aspect of the present application proposes a multi-channel-based video display system including a video source for acquiring video monitoring data of a monitoring place, a display device for displaying a multi-channel video display window, an ambient brightness sensor for acquiring ambient brightness data, an image pickup device for acquiring an ambient image directly in front of the display device, and a control device including a processor and a memory, the processor executing a computer program stored by the memory to implement the multi-channel-based video display method of any one of the first aspect of the present application.
Specifically, in some embodiments of the present application, the ambient brightness sensor, the image capturing device, the display device and the control device are integrated into one large-screen control terminal. In other embodiments of the present application, the ambient brightness sensor and the image capturing device are separate devices, the control device may be a personal computer or a workstation, and the display device is the display of the personal computer or workstation.
It should be noted that in this document relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Embodiments in accordance with the present application, as described above, are not intended to be exhaustive or to limit the application to the precise embodiments disclosed. Obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the application and the practical application, to thereby enable others skilled in the art to best utilize the application and various modifications as are suited to the particular use contemplated. The application is limited only by the claims and the full scope and equivalents thereof.

Claims (7)

1. A multi-channel based video display method, comprising:
constructing a video source object VS[i] for reading the video monitoring data of each video source, where i ∈ {1, 2, ..., n} and n is the number of video sources;
initializing the video source objects with the addresses of the video sources, so that each video source object is associated with its video source and can acquire the corresponding video monitoring data;
constructing a frame vector F for reading the data frames of the video monitoring data, the frame vector F comprising n Mat objects Mat[i] for storing the data frames of the corresponding video sources;
constructing an n×P-dimensional brightness matrix L for reading and writing the brightness values of the Mat[i], where P is the total number of pixels of each data frame;
creating a multi-channel video display window, the multi-channel video display window comprising n video display areas A[k], k ∈ {1, 2, ..., n};
reading a layout configuration file of the multi-channel video, wherein the layout configuration file comprises a layout code of each video source;
establishing a correspondence between the layout code of each video source and a video display area;
configuring a preset number m of brightness distribution intervals D[j], where j ∈ {1, 2, ..., m} and m is the preset interval number;
before an end instruction is received, executing the following steps cyclically:
traversing each video source object VS[i] and reading a data frame of the video monitoring data from the corresponding video source;
converting the data frame into the YUV color coding format and writing it into Mat[i] of the frame vector F;
cyclically traversing the Mat[i]:
extracting the Y value in the YUV color coding of each pixel from Mat[i] and writing it into the i-th row of the brightness matrix L;
using the brightness matrix L to count the number N[j] of pixels in Mat[i] that fall into each brightness distribution interval D[j];
processing the pixel data in Mat[i] according to the pixel distribution numbers N[j];
updating, according to Mat[i], the display picture of the corresponding video display area A[k] in the multi-channel video display window;
the step of processing the pixel data in Mat[i] according to the pixel distribution numbers N[j] specifically comprises:
determining, according to the pixel distribution numbers N[j], a minimum pixel difference brightness line l_min among the pixel data in Mat[i], such that the difference between the number of pixels in Mat[i] whose brightness is greater than the minimum pixel difference brightness line l_min and the number of pixels whose brightness is less than l_min is minimal;
obtaining a brightness reference line l_ref;
modifying the Y value in the YUV color coding of each pixel of Mat[i] according to the minimum pixel difference brightness line l_min and the brightness reference line l_ref;
the step of determining, according to the pixel distribution numbers N[j], the minimum pixel difference brightness line among the pixel data in Mat[i] specifically comprises:
initializing a temporary pixel difference value: d = NULL;
initializing the minimum pixel difference brightness line: l_min = NULL;
cyclically traversing the values of j from 1 to m to perform the following steps:
calculating a temporary pixel difference value: d[j] = (N[j+1] + ... + N[m]) − (N[1] + ... + N[j]);
when d[j] > 0, letting d = d[j] and then continuing the loop;
when d[j] = 0, letting the minimum pixel difference brightness line l_min equal the upper bound of the brightness distribution interval D[j], and ending the traversal of the j values;
when d[j] < 0, calculating a first temporary pixel difference value: d_a = |d|,
calculating a second temporary pixel difference value: d_b = |d[j]|;
when d_a ≤ d_b, letting the minimum pixel difference brightness line l_min equal the lower bound of the brightness distribution interval D[j], and ending the traversal of the j values;
when d_a > d_b, letting the minimum pixel difference brightness line l_min equal the upper bound of the brightness distribution interval D[j], and ending the traversal of the j values;
the step of modifying the Y value in the YUV color coding of each pixel of Mat[i] according to the minimum pixel difference brightness line l_min and the brightness reference line l_ref specifically comprises:
calculating the brightness difference between the minimum pixel difference brightness line and the brightness reference line: Δl = l_min − l_ref;
subtracting Δl from each brightness value in the i-th row of the brightness matrix L to obtain a new brightness matrix L';
modifying the Y value in the YUV color coding of each pixel of Mat[i] to the corresponding brightness value in the i-th row of the brightness matrix L';
in the step of subtracting Δl from each brightness value in the i-th row of the brightness matrix L to obtain the new brightness matrix L': when Δl is positive and a brightness value in the i-th row of L is smaller than Δl, the corresponding brightness value in the i-th row of the new brightness matrix L' is set to 0; and when Δl is negative and a brightness value in the i-th row of L minus Δl is greater than 255, the corresponding brightness value in the i-th row of the new brightness matrix L' is set to 255.
2. The multi-channel based video display method according to claim 1, wherein the step of modifying the Y value in the YUV color coding of each pixel of Mat[i] according to the minimum pixel difference brightness line l_min and the brightness reference line l_ref specifically comprises:
calculating the brightness difference between the minimum pixel difference brightness line and the brightness reference line: Δl = l_min − l_ref;
traversing each brightness value L[i][q] in the i-th row of the brightness matrix L, wherein L[i][q] is the q-th brightness value in the i-th row of the brightness matrix L, and calculating, for each of the two cases distinguished by the value of L[i][q], a matrix brightness difference value Δl[q] from the brightness difference Δl and the brightness value L[i][q];
subtracting Δl[q] from the q-th brightness value in the i-th row of the brightness matrix L to obtain a new brightness matrix L';
modifying the Y value in the YUV color coding of each pixel of Mat[i] to the corresponding brightness value in the i-th row of the brightness matrix L'.
3. The multi-channel based video display method of claim 1, wherein the step of obtaining the brightness reference line l_ref specifically comprises:
reading ambient brightness data from an ambient brightness sensor and converting the ambient brightness data into an ambient brightness value b;
obtaining an upper bound b_up and a lower bound b_low of the ambient brightness value range;
calculating the brightness reference line l_ref from the ambient brightness value b, the upper bound b_up and the lower bound b_low.
4. The multi-channel based video display method of claim 1, further comprising, after the step of cyclically traversing the Mat[i]:
acquiring, from an image pickup device, an environment image in front of a display device that displays the multi-channel video display window;
determining, through face recognition, whether a monitoring person who is watching the display device exists in the environment image;
when no monitoring person watching the display device exists in the environment image, skipping the steps of extracting the Y value in the YUV color coding of each pixel from Mat[i] and writing it into the brightness matrix L, using the brightness matrix L to count the number N[j] of pixels in Mat[i] falling into each brightness distribution interval D[j], and processing the pixel data in Mat[i] according to the pixel distribution numbers N[j];
and directly executing the step of updating, according to Mat[i], the display picture of the corresponding video display area A[k] in the multi-channel video display window.
5. The multi-channel based video display method according to claim 4, further comprising, after the step of determining, through face recognition, whether a monitoring person who is watching the display device exists in the environment image:
determining the number Q of monitoring persons who are watching the display device;
when the number Q of monitoring persons who are watching the display device is greater than 1, executing the steps of extracting the Y value in the YUV color coding of each pixel from Mat[i] and writing it into the brightness matrix L, using the brightness matrix L to count the number N[j] of pixels in Mat[i] falling into each brightness distribution interval D[j], processing the pixel data in Mat[i] according to the pixel distribution numbers N[j], and updating, according to Mat[i], the display picture of the corresponding video display area A[k] in the multi-channel video display window.
6. The multi-channel based video display method according to claim 5, further comprising, after the step of determining, through face recognition, whether a monitoring person who is watching the display device exists in the environment image:
when a monitoring person who is watching the display device exists in the environment image and the number of monitoring persons who are watching the display device is 1, locating, through the environment image, the position of the line-of-sight focus of the monitoring person on the display device;
and further comprising, after the step of cyclically traversing the Mat[i]:
calculating the distance of the monitoring person according to the environment image;
acquiring the field-of-view coverage corresponding to the distance, wherein the field-of-view coverage is the display area range in which the preset field-of-view angle range of the monitoring person, centered on the line-of-sight focus, intersects the surface of the display device;
determining the video display area A[k] corresponding to Mat[i];
judging whether the video display area A[k] intersects the field-of-view coverage;
when the video display area A[k] does not intersect the field-of-view coverage, skipping the steps of extracting the Y value in the YUV color coding of each pixel from Mat[i] and writing it into the brightness matrix L, using the brightness matrix L to count the number N[j] of pixels in Mat[i] falling into each brightness distribution interval D[j], and processing the pixel data in Mat[i] according to the pixel distribution numbers N[j];
and directly executing the step of updating, according to Mat[i], the display picture of the corresponding video display area A[k] in the multi-channel video display window.
7. A multi-channel based video display system comprising a video source for acquiring video monitoring data of a monitoring location, a display device for displaying a multi-channel video display window, an ambient brightness sensor for acquiring ambient brightness data, an image capturing device for acquiring an ambient image directly in front of the display device, and a control device comprising a processor and a memory, the processor executing a computer program stored by the memory to implement the multi-channel based video display method according to any one of claims 1-6.
CN202310764216.4A 2023-06-27 2023-06-27 Video display method and system based on multiple channels Active CN116506562B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310764216.4A CN116506562B (en) 2023-06-27 2023-06-27 Video display method and system based on multiple channels

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310764216.4A CN116506562B (en) 2023-06-27 2023-06-27 Video display method and system based on multiple channels

Publications (2)

Publication Number Publication Date
CN116506562A (en) 2023-07-28
CN116506562B (en) 2023-09-05

Family

ID=87316972

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310764216.4A Active CN116506562B (en) 2023-06-27 2023-06-27 Video display method and system based on multiple channels

Country Status (1)

Country Link
CN (1) CN116506562B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101778246A (en) * 2010-01-29 2010-07-14 华为终端有限公司 Method and device for processing multipicture video image
CN102857739A (en) * 2012-08-20 2013-01-02 上海光亮光电科技有限公司 Distributed panorama monitoring system and method thereof
CN105898230A (en) * 2016-05-20 2016-08-24 深圳英飞拓科技股份有限公司 Spliced image brightness balancing method and device based on multiple input channels
CN108650495A (en) * 2018-06-28 2018-10-12 华域视觉科技(上海)有限公司 A kind of automobile-used panoramic looking-around system and its adaptive light compensation method
CN114463208A (en) * 2022-01-25 2022-05-10 润建股份有限公司 Dimension-building monitoring video image enhancement method based on contrast-limiting adaptive histogram equalization method
CN115602092A (en) * 2022-09-14 2023-01-13 中国船舶集团有限公司第七〇九研究所(Cn) Brightness self-adaptive cross-screen display method and device applied to tiled display
CN116156140A (en) * 2022-12-20 2023-05-23 深圳市洲明科技股份有限公司 Video display processing method, device, computer equipment and storage medium


Also Published As

Publication number Publication date
CN116506562A (en) 2023-07-28

Similar Documents

Publication Publication Date Title
US10630899B2 (en) Imaging system for immersive surveillance
US7880739B2 (en) Virtual window with simulated parallax and field of view change
US8953048B2 (en) Information processing apparatus and control method thereof
US8547421B2 (en) System for adaptive displays
US9615062B2 (en) Multi-resolution image display
CN106101610B (en) Image display system, information processing equipment and image display method
CN110660023A (en) Video stitching method based on image semantic segmentation
CN106373148A (en) Equipment and method for realizing registration and fusion of multipath video images to three-dimensional digital earth system
CN109816745A (en) Human body thermodynamic chart methods of exhibiting and Related product
SG191198A1 (en) Imaging system for immersive surveillance
CN109348088A (en) Image denoising method, device, electronic equipment and computer readable storage medium
KR102019031B1 (en) Apparatus, system and method for real-time parking lot video object recognition from cctv parking lot video data
CN113179673B (en) Image monitoring device applying multi-camera moving path tracking technology
US11206376B2 (en) Systems and methods for image processing
CN106447788B (en) Method and device for indicating viewing angle
US20100245584A1 (en) Method and apparatus for creating a zone of interest in a video display
CN102945563A (en) Showing and interacting system and method for panoramic videos
CN115375779B (en) Method and system for camera AR live-action annotation
CN110324572A (en) Monitoring system, monitoring method and non-transitory computer-readable storage media
CN112351266B (en) Three-dimensional visual processing method, device, equipment, display system and medium
WO2023202216A9 (en) Image processing method and apparatus, and storage medium
CN109803100A (en) A kind of ghost method that adaptively disappears
US7130463B1 (en) Zoomed histogram display for a digital camera
CN202872936U (en) 360-degree non-blind area panorama video shooting device based on regular polyhedron
CN116506562B (en) Video display method and system based on multiple channels

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant