CN111818387A - Display content identification method, identification device, identification system and computer-readable storage medium for rail transit - Google Patents


Info

Publication number
CN111818387A
CN111818387A (application number CN202010613965.3A)
Authority
CN
China
Prior art keywords
stream
video stream
graphics stream
display content
coded graphics
Prior art date
Legal status (assumed, not a legal conclusion)
Pending
Application number
CN202010613965.3A
Other languages
Chinese (zh)
Inventor
谢正光
徐会杰
楚柏青
赵丞皓
张欣
孙新
张骄
张衡
Current Assignee (as listed, not verified)
Subway Operation Technology R & D Center Beijing Subway Operation Co ltd
Original Assignee
Subway Operation Technology R & D Center Beijing Subway Operation Co ltd
Priority date (assumed, not a legal conclusion)
Filing date
Publication date
Application filed by Subway Operation Technology R & D Center, Beijing Subway Operation Co., Ltd.
Priority application: CN202010613965.3A
Publication of CN111818387A
Legal status: Pending

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/4402Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/184Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being bits, e.g. of the compressed video stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/46Embedding additional information in the video signal during the compression process
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/84Generation or processing of descriptive data, e.g. content descriptors
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265Mixing

Abstract

The application relates to a display content identification method, an identification device, an identification system, and a computer-readable storage medium for rail transit. The identification method comprises: obtaining a video stream; generating a first feature code corresponding to the video stream and encoding it to obtain a first coded graphics stream; inserting the generated first coded graphics stream into the corresponding video stream and sending it out; obtaining a second feature code corresponding to the graphics stream after the first coded graphics stream is played; and comparing the second feature code with the first feature code. The identification device and the identification system verify the video stream being played using this method, and the computer-readable storage medium stores code corresponding to the display content identification method for rail transit. By inserting verification information into the played video stream and performing secondary verification on that information, the played video stream is identified, which effectively improves playback security.

Description

Display content identification method, identification device, identification system and computer-readable storage medium for rail transit
Technical Field
The present application relates to the field of video information security technologies, and in particular, to a display content identification method, device, system, and computer-readable storage medium for rail transit.
Background
Played video usually has multiple sources, such as central streaming media, remotely distributed video files, a signaling system, or an information-source central distribution system, and it can be transmitted over multiple paths. However the source and transmission path may differ, the information is ultimately displayed on a screen; that is, the screen is the final convergence point and output port of all the information.
During video playback, various types of intrusion may occur, such as signal-source intrusion and human intrusion, in which an intruder substitutes another video source for the one being played. How to quickly identify the video source being played and ensure playback security has therefore become an important research topic.
Disclosure of Invention
In order to improve the security of played content, the present application provides a display content identification method, identification device, identification system, and computer-readable storage medium for rail transit.
The above object of the present application is achieved by the following technical solutions:
in a first aspect, the present application provides a method for identifying display content for rail transit, including:
obtaining a video stream;
generating a first feature code corresponding to the video stream and coding the first feature code to obtain a first coded graphics stream;
inserting the generated first coded graphics stream into a corresponding video stream and sending the first coded graphics stream;
obtaining a second feature code corresponding to the displayed graphics stream of the first coded graphics stream; and
comparing the second feature code with the first feature code, wherein if the comparison result is consistent, the video stream being played is consistent with the obtained video stream, and if the comparison result is inconsistent, the video stream being played is abnormal;
the first coded graphics stream comprises static image information, and image-free time periods are arranged between adjacent static image information time periods in a time sequence corresponding to the first coded graphics stream.
With this technical scheme, a first feature code is generated synchronously from the video stream to be played, compiled into a first coded graphics stream, and inserted into the video stream, which is then sent to the designated device for playback. Meanwhile, the played first coded graphics stream is captured and decompiled to obtain a second feature code, which is compared with the first feature code: a consistent comparison result indicates that the video stream being played is normal, and an inconsistent result indicates that it is abnormal. By inserting verification information into the video stream to be played, then capturing it and comparing it with the original verification information, the video stream being played can be quickly identified. Inserting image-free time periods into the first coded graphics stream increases its complexity, which makes the first coded graphics stream harder to crack and correspondingly increases security.
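As an illustrative sketch only (not part of the disclosed application), the generate-insert-capture-compare flow of steps S101 to S105 can be outlined as follows; the hash function, code length, and all names are assumptions chosen for the example:

```python
import hashlib

def generate_feature_code(video_chunk: bytes) -> str:
    # Derive a short feature code from the video content. The application does
    # not fix a concrete algorithm; a truncated SHA-256 digest stands in here.
    return hashlib.sha256(video_chunk).hexdigest()[:16]

def encode_to_bits(feature_code: str) -> str:
    # S102: compile the feature code into the on/off ("bright"/"dark") pattern
    # that drives the flickering overlay region.
    return "".join(format(b, "08b") for b in feature_code.encode("ascii"))

def decode_bits(bits: str) -> str:
    # S104: parse the captured flicker pattern back into a feature code.
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8)).decode("ascii")

def verify(first_code: str, second_code: str) -> bool:
    # S105: matching codes mean the played stream is the one that was sent.
    return first_code == second_code

first = generate_feature_code(b"example video payload")    # S101/S102
pattern = encode_to_bits(first)                            # S103: overlay inserted
second = decode_bits(pattern)                              # S104: capture and decode
assert verify(first, second)                               # S105: consistent
assert not verify(first, generate_feature_code(b"substituted stream"))
```

In a tamper-free playback the encode/decode round trip reproduces the first feature code; a substituted stream yields a mismatching code.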
In a preferred example of the first aspect: within any unit time length of the time series corresponding to the first encoded graphics stream, there are still-image information periods of two or more different lengths.
With this scheme, still-image information periods of different lengths are inserted into the first encoded graphics stream, increasing its complexity; this makes the first encoded graphics stream harder to crack and correspondingly more secure.
In a preferred example of the first aspect: within any unit time length of the time series corresponding to the first encoded graphics stream, there are image-free periods of two or more different lengths.
With this scheme, image-free periods of different lengths are inserted into the first encoded graphics stream, increasing its complexity; this makes the first encoded graphics stream harder to crack and correspondingly more secure.
In a preferred example of the first aspect: the first encoded graphics stream is located at an edge of a display area of a corresponding video stream.
By adopting the technical scheme, the display area of the first coded graphics stream is adjusted to the edge of the display area of the video stream, so that the influence of the display of the first coded graphics stream on the display of the video stream can be reduced.
In a preferred example of the first aspect: the first encoded graphics stream is located at an intersection of adjacent edges of a display area corresponding to the video stream.
By adopting the technical scheme, the display area of the first coded graphics stream is adjusted to the boundary of the adjacent edges of the display area of the video stream, and the influence of the display of the first coded graphics stream on the display of the video stream is further reduced.
In a preferred example of the first aspect: a handling condition is triggered when the comparison of the first feature code with the second feature code is inconsistent.
With this scheme, when the first and second feature codes do not match, the video stream being played is not the one that should be played; a handling condition is then triggered to process the video stream being played so as to minimize the impact.
In a preferred example of the first aspect: the handling condition comprises sending out warning information, playing a default video stream, skipping the video stream being sent, stopping sending the video stream, or closing the display terminal.
Several types of handling conditions are provided and can be selected according to actual conditions, supporting a wider range of usage scenarios.
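The five handling conditions listed above can be sketched as a simple dispatcher; this is a purely hypothetical illustration, and the enum names and return values are assumptions rather than anything specified by the application:

```python
from enum import Enum

class Handling(Enum):
    # The five handling conditions listed above (names are illustrative).
    WARN = "send out warning information"
    PLAY_DEFAULT = "play a default video stream"
    SKIP = "skip the video stream being sent"
    STOP = "stop sending the video stream"
    SHUT_DOWN = "close the display terminal"

def on_mismatch(action: Handling) -> str:
    # Hypothetical dispatcher: a real deployment would call into the playout
    # controller here instead of returning a description string.
    return f"handling triggered: {action.value}"

assert on_mismatch(Handling.WARN) == "handling triggered: send out warning information"
```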
In a second aspect, the present application provides a display content recognition apparatus, including:
a first acquisition unit that acquires a video stream;
the first coding unit generates a first feature code corresponding to the video stream and compiles the first feature code to obtain a first coded graphics stream, wherein the first coded graphics stream comprises static image information, and image-free time periods are arranged between adjacent static image information time periods on a time sequence corresponding to the first coded graphics stream;
a communication unit for inserting the generated first coded graphics stream into a video stream corresponding thereto and transmitting the first coded graphics stream;
the second acquisition unit is used for acquiring a second feature code corresponding to the displayed graphics stream of the first coded graphics stream; and
and a comparison unit, configured to compare the second feature code with the first feature code and give a comparison result.
With this technical scheme, a first feature code is generated synchronously from the video stream to be played, compiled into a first coded graphics stream, and inserted into the video stream, which is then sent to the designated device for playback. Meanwhile, the played first coded graphics stream is captured and decompiled to obtain a second feature code, which is compared with the first feature code: a consistent comparison result indicates that the video stream being played is normal, and an inconsistent result indicates that it is abnormal. By inserting verification information into the video stream to be played, then capturing it and comparing it with the original verification information, the video stream being played can be quickly identified. Inserting image-free time periods into the first coded graphics stream increases its complexity, which makes the first coded graphics stream harder to crack and correspondingly increases security.
In a preferred example of the second aspect: the device also comprises a second coding unit;
the second encoding unit is configured to insert, between adjacent still image information periods, image-free periods of equal or unequal length in a time series corresponding to the first encoded graphics stream.
Inserting image-free periods of equal or different lengths into the first encoded graphics stream increases its complexity; this makes the first encoded graphics stream harder to crack and correspondingly more secure.
In a preferred example of the second aspect: the device also comprises a third coding unit;
the third encoding unit is configured to adjust a length of the still image information period in a time series corresponding to the first encoded graphics stream.
With this scheme, still-image information periods of different lengths are inserted into the first encoded graphics stream, increasing its complexity; this makes the first encoded graphics stream harder to crack and correspondingly more secure.
In a preferred example of the second aspect: also included is a first position adjustment unit for adjusting a display position of the first encoded graphics stream to an edge of a display area of the corresponding video stream.
By adopting the technical scheme, the display area of the first coded graphics stream is adjusted to the edge of the display area of the video stream, so that the influence of the display of the first coded graphics stream on the display of the video stream can be reduced.
In a preferred example of the second aspect: the apparatus further comprises a second position adjustment unit for adjusting the display position of the first encoded graphics stream to the intersection of adjacent edges of the display area of the corresponding video stream.
By adopting the technical scheme, the display area of the first coded graphics stream is adjusted to the boundary of the adjacent edges of the display area of the video stream, and the influence of the display of the first coded graphics stream on the display of the video stream is further reduced.
In a preferred example of the second aspect: the apparatus further comprises a processing unit for triggering a handling condition when the comparison of the second feature code with the first feature code is inconsistent.
With this scheme, when the second feature code does not match the first feature code, the video stream being played is not the one that should be played; a handling condition is then triggered to process the video stream being played so as to minimize the impact.
In a preferred example of the second aspect: the handling condition comprises sending out warning information, skipping the video stream being sent, playing a default video stream, stopping sending the video stream, or closing the display terminal.
Several types of handling conditions are provided and can be selected according to actual conditions, supporting a wider range of usage scenarios.
In a third aspect, the present application provides a display content identification system, the system comprising:
one or more memories for storing instructions; and
one or more processors, configured to invoke and execute the instructions from the memory, and execute any one of the display content identification methods for rail transit as described in the first aspect and the preferred examples of the first aspect.
In a fourth aspect, the present application provides a computer-readable storage medium comprising:
a program that, when executed by a processor, performs any one of the display content identification methods for rail transit as described in the first aspect and the preferred examples of the first aspect.
In a fifth aspect, the present application provides a computer program product comprising program instructions for executing any one of the display content recognition methods for rail transit described in the first aspect and the preferred examples of the first aspect when the program instructions are executed by a computing device.
In a sixth aspect, the present application provides a system on a chip comprising a processor configured to perform the functions recited in the above aspects, such as generating, receiving, sending, or processing data and/or information recited in the above methods.
In one possible design, the system-on-chip further includes a memory for storing necessary program instructions and data. The chip system may be formed by a chip, or may include a chip and other discrete devices. The processor and the memory may be decoupled, disposed on different devices, connected in a wired or wireless manner, or coupled on the same device.
Drawings
Fig. 1 is a flowchart of a display content identification method for rail transit according to an embodiment of the present application.
FIG. 2 is a diagram of some first encoded graphics streams provided by an embodiment of the present application.
FIG. 3 is a schematic diagram of other first encoded graphics streams provided by embodiments of the present application.
Fig. 4 is a schematic display diagram of a video stream and a first encoded graphics stream inserted into the video stream according to an embodiment of the present application.
Fig. 5 is a schematic diagram illustrating a display of another video stream and a first encoded graphics stream inserted into the video stream according to an embodiment of the present application.
Fig. 6 is a schematic diagram illustrating a display of a video stream and a first encoded graphics stream inserted into the video stream according to an embodiment of the present application.
Detailed Description
The technical solution of the present application will be described in further detail below with reference to the accompanying drawings.
In order to provide a clearer understanding of the present application, several existing identification methods will be described first.
The first existing identification method is manual identification, such as manual monitoring or inspection. This has the lowest cost, but its timeliness cannot be guaranteed, and omissions occur easily, especially when multiple screens fall within the management scope.
The second existing method is visual analysis: the played video is captured and feature recognition is performed to reach a judgment. This method consumes enormous system resources, and because the playing source is uncertain, the matching algorithm must be continuously updated, so identification is costly.
The third existing method inserts a two-dimensional code into the played video. The recognition cost of a two-dimensional code is relatively high, and the recognition process lags, so quick identification cannot be achieved. A further variant identifies the color of a specific area in the video, but the color capture is easy to forge, which can deceive the identification algorithm.
Referring to fig. 1, a method for identifying display content for rail transit disclosed in an embodiment of the present application includes:
s101, obtaining a video stream.
S102, generating a first feature code corresponding to the video stream and coding the first feature code to obtain a first coded graphics stream.
S103, inserting the generated first coded graphics stream into the corresponding video stream and sending the first coded graphics stream.
S104, obtaining a second feature code corresponding to the graphics stream after the first coded graphics stream is played.
And S105, comparing the second feature code with the first feature code; if the comparison result is consistent, the video stream being played is consistent with the obtained video stream, and if it is inconsistent, the video stream being played is abnormal.
In steps S101 and S102, identification information, i.e., a first feature code, is inserted into the video stream to be played. It should be understood that the first feature code is a string of characters that may be arranged in binary, in decimal, or as a mixture of digits and letters, for example:
0100101001010101010101010010101001010100010110……
4737533545365628475893562908493573452074507549……
FHISJGHDJF380FJDFJ87900DFSHF34848DSHFHEW8F789S……
of course other arrangements of generation are possible.
It should also be understood that the first feature code may be static or dynamic. A static first feature code is a string of characters in a fixed order that is reused throughout operation; a dynamic first feature code is a string generated by a specific algorithm or by random arrangement, and it changes continuously over time. A dynamic code is clearly harder to crack and therefore more secure.
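One possible way to realize a dynamic feature code, sketched here purely as an assumption (the application only requires that the code change over time, not this particular scheme), is to derive it from a shared secret and a timestamp via an HMAC:

```python
import hashlib
import hmac

def dynamic_feature_code(secret: bytes, timestamp: int, digits: int = 12) -> str:
    # HMAC over a timestamp: the code varies with time yet remains
    # reproducible at both ends that hold the secret, so the comparison
    # in step S105 can still be performed.
    msg = timestamp.to_bytes(8, "big")
    return hmac.new(secret, msg, hashlib.sha256).hexdigest()[:digits].upper()

key = b"shared-secret"
code_t0 = dynamic_feature_code(key, 1_000)
code_t1 = dynamic_feature_code(key, 1_001)
assert code_t0 != code_t1                            # the code changes over time
assert code_t0 == dynamic_feature_code(key, 1_000)   # but is reproducible for comparison
```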
Since directly recognizing the generated first feature code is also difficult and costly, a secondary compilation is required: a first encoded graphics stream is generated from the first feature code, inserted into the video stream, and sent out to be played in synchronization with the video stream.
The first coded graphics stream is composed of still image information, and in the time sequence of playing or displaying, the time period in which the still image information appears is called a still image information time period, the time period in which the still image information does not appear is called an image-free time period, and the still image information time period and the image-free time period are alternately arranged.
It should be understood that, in playback or display terms, when an image appears the region is colored, a state that may be called "bright"; when no image appears the region is black, a state that may be called "dark". The region corresponding to the first encoded graphics stream therefore flickers during playback or display.
It should be understood here that the difficulty of acquiring "light" and "dark" is significantly lower compared to direct image recognition, two-dimensional code recognition or color recognition.
In some possible implementations, the flicker pattern of the region can be analyzed. Parts (a), (b), and (c) of fig. 2 show flicker states of the display region, from which it can be seen that the flicker state can be adjusted; the "bright" state is denoted 1 and the "dark" state 0.
In a unit time length, a state with "bright" is written as 1, and a state without "bright" is written as 0.
Then a continuous string can be obtained as follows:
0100101001010101010101010010101001010100010110……
during parsing, a fixed number of bits may be selected as a group, for example, a three-bit number may be selected as a group, and the following contents are obtained:
010,010,100,101……
according to the binary rule, 010 corresponds to 2 in decimal, 100 corresponds to 4 in decimal, and 101 corresponds to 5 in decimal, so that a string of decimal numbers can be obtained.
The grouping may use three or four bits per group, although grouping according to other specific rules is also possible.
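The fixed-width grouping described above can be sketched as follows; the function name and the handling of trailing leftover bits are illustrative assumptions:

```python
def parse_bit_groups(bits: str, group: int = 3) -> list[int]:
    # Split the captured bit string into fixed-width groups and read each
    # group as a binary number, as in the three-bit example above; any
    # incomplete trailing group is discarded.
    usable = len(bits) - len(bits) % group
    return [int(bits[i:i + group], 2) for i in range(0, usable, group)]

# First twelve bits of the sample string: 010 010 100 101 -> 2, 2, 4, 5
assert parse_bit_groups("010010100101") == [2, 2, 4, 5]
assert parse_bit_groups("01001010", group=4) == [4, 10]
```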
In addition, the obtained decimal digits can also be grouped according to a certain rule, and the following contents are obtained:
34345734535790894845023950……
then, a string of characters can be obtained according to the mapping relationship, so that the difficulty of cracking can be further increased, for example, one or more digits are used as a group, letters, special characters and the like are corresponding to the mapping relationship, and the feature codes can be encrypted and decrypted through rules and the mapping relationship so as to ensure the safety of the feature codes.
It is also feasible, within each unit time length, to record the "bright" state as 1 and to count the number of bright flashes, recording the count as a digit.
Then a continuous string can be obtained as follows:
064586757823945375950……
then, the corresponding feature code can be obtained according to the mapping relationship, and the step is the same as the above-mentioned step, which is not described herein again.
It should be understood that compiling a feature code into a flickering graphics stream at the sending end and parsing the captured flicker back into a specific code are essentially the same process, except that one runs forward and the other in reverse.
The content of step S103 is to insert the generated first encoded graphics stream into the corresponding video stream and send it out, and it should be understood that the video stream can be sent to various devices such as a display, a display screen, and a combined display screen for display, as long as the device can receive and display the video stream.
In step S104, a second feature code corresponding to the graphics stream after the first encoded graphics stream is played is obtained. Referring to steps S102 and S103, after the first encoded graphics stream is played, the corresponding display region flickers; data acquisition on that region therefore yields a data stream, and after the data stream is parsed, a feature code is obtained via the mapping relationship. This feature code is called the second feature code.
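The acquisition step can be sketched as a simple thresholding of per-unit brightness samples taken from the flickering region; the threshold value and function name are assumptions, and a real capture path would involve camera or frame-buffer sampling:

```python
def samples_to_bits(brightness: list[float], threshold: float = 0.5) -> str:
    # Threshold the brightness samples captured from the flickering region:
    # "bright" -> 1, "dark" -> 0. The resulting bit string is then parsed by
    # the grouping and mapping steps described earlier to recover the
    # second feature code.
    return "".join("1" if b > threshold else "0" for b in brightness)

assert samples_to_bits([0.92, 0.05, 0.88, 0.10, 0.07]) == "10100"
```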
In step S105, the second feature code is compared with the first feature code. If the played video stream is the video stream obtained in step S101, the first and second feature codes follow the same rules and should be consistent. If it is not, the comparison result will be inconsistent, indicating that the displayed content is incorrect and corresponding measures must be taken immediately.
Referring to fig. 3 (d), as a specific embodiment of the display content identification method for rail transit, the lengths of the still-image information periods in the time series of the first encoded graphics stream are adjusted so that, within a unit time length, there are still-image information periods of two or more different lengths. Adding this time parameter as a supplement clearly increases the difficulty of cracking the feature code.
Of course, the represented information may be other information, and may be determined according to the encoding and mapping relationship.
Referring to fig. 3 (e), as a specific embodiment of the display content identification method for rail transit, the lengths of the image-free periods in the time series of the first encoded graphics stream are adjusted so that, within a unit time length, there are image-free periods of two or more different lengths. Adding this time parameter as a supplement clearly increases the difficulty of cracking the feature code.
Of course, the represented information may be other information, and may be determined according to the encoding and mapping relationship.
In addition, referring to (f) in fig. 3, for the feature coding, the lengths of both the static image information time periods and the image-free time periods may be adjusted in the time sequence, which further increases the security of the feature coding and reduces the possibility of it being cracked.
Referring to fig. 4 and 5, the solid-line boxes in the figures represent the display area of the video stream and the dashed-line boxes represent the display area of the first encoded graphics stream. As an embodiment of the method for identifying display content for rail transit, the first encoded graphics stream is located at the edge of the display area corresponding to the video stream, which improves viewing comfort. It should be understood that the video stream acquired in step S101 is intended for a recipient: in a station, the recipients are the travellers in the station; in a mall, the customers in the mall. It should also be understood that the region corresponding to the first encoded graphics stream blinks while it is played, which may disturb the recipient's normal viewing, so shifting it to the edge of the display area significantly improves viewing comfort.
Referring to fig. 6, the solid-line box in the figure represents the display area of the video stream and the dashed-line box represents the display area of the first encoded graphics stream. Further, the first encoded graphics stream is located at the intersection of adjacent edges of the display area corresponding to the video stream, i.e. at a corner of the display area.
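The corner placement just described amounts to simple rectangle arithmetic. The bottom-right corner and the function name below are illustrative choices, not mandated by the text.

```python
def corner_position(display_w: int, display_h: int,
                    code_w: int, code_h: int) -> tuple[int, int]:
    """Top-left pixel at which to draw the first encoded graphics
    stream so that it sits at the intersection of two adjacent edges
    (a corner) of the video display area."""
    return display_w - code_w, display_h - code_h

# A 64x64 code region on a 1920x1080 display area:
assert corner_position(1920, 1080, 64, 64) == (1856, 1016)
```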
It should be understood that in public places such as stations or malls, the played information should be managed and cannot be played arbitrarily. If, in the course of identification, the actually played content is found to be inconsistent with the content that should be played, that is, the video stream acquired in step S101, measures should be taken. Therefore, as a specific implementation of the method for identifying display content for rail transit provided by the application, a handling condition is triggered when the comparison result between the second encoded graphics stream and the first encoded graphics stream is inconsistent.
The handling condition is set according to the played video, and the actually played video falls into the following cases:
in the first case, the actually played video is the video stream acquired in step S101, which indicates normal playing and requires no handling;
in the second case, the actually played video is not the video stream acquired in step S101, which indicates abnormal playing and requires handling;
of course, the following situations may also cause abnormal playing:
the first: the display area shows no content and remains in an image-free state, caused for example by the screen being turned off, the equipment being turned off, or line damage;
the second: the display area remains stuck on a fixed image, which may be caused by equipment damage or line damage.
In any of these cases, the display content is judged abnormal. The corresponding handling conditions are likewise various, such as issuing warning information, playing a default video stream, skipping the video stream being sent, stopping the sending of the video stream, or turning off the display terminal.
In some possible embodiments, when playing is abnormal, the default video stream is played, the video stream being sent is skipped, or the sending of the video stream is stopped; when the display area is in an image-free or fixed-image state, warning information is issued.
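The playback states and handling conditions listed above can be sketched as a small dispatch. The state names and the chosen measures are illustrative, drawn from the examples in the text rather than an exhaustive specification.

```python
from enum import Enum, auto

class PlayState(Enum):
    NORMAL = auto()       # played video matches the acquired stream
    MISMATCH = auto()     # feature codes differ: wrong content playing
    NO_IMAGE = auto()     # display stuck in an image-free state
    FIXED_IMAGE = auto()  # display stuck on one fixed image

def handle(state: PlayState) -> str:
    """Return a handling condition for a playback state (illustrative
    mapping; alternatives include skipping the stream, stopping the
    send-out, or turning off the display terminal)."""
    if state is PlayState.NORMAL:
        return "no action"
    if state is PlayState.MISMATCH:
        return "play default video stream"
    # no-image / fixed-image faults point at equipment or line damage
    return "issue warning information"

assert handle(PlayState.FIXED_IMAGE) == "issue warning information"
```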
An embodiment of the present application discloses a display content recognition apparatus, the recognition apparatus comprising:
a first acquisition unit that acquires a video stream;
a first encoding unit that generates a first feature code corresponding to the video stream and compiles the first feature code into a first coded graphics stream, wherein the first coded graphics stream comprises static image information, and image-free time periods are arranged between adjacent static image information time periods in the time sequence corresponding to the first coded graphics stream;
a communication unit that synchronously sends out the video stream and the first coded graphics stream corresponding to it;
a second acquisition unit that obtains a second feature code corresponding to the graphics stream as displayed after the first coded graphics stream is played; and
a comparison unit that compares the second feature code with the first feature code and gives a comparison result.
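The cooperation of these units can be sketched as a pipeline. The injected callables and all names below are illustrative stand-ins for the unspecified concrete algorithms, not the patent's implementation.

```python
class DisplayContentRecognizer:
    """Minimal sketch wiring the units listed above (hypothetical)."""

    def __init__(self, generate_code, compile_code, send, capture_code):
        self.generate_code = generate_code  # encoding unit: video -> first feature code
        self.compile_code = compile_code    # encoding unit: code -> first coded graphics stream
        self.send = send                    # communication unit
        self.capture_code = capture_code    # second acquisition unit

    def run(self, video_stream) -> bool:
        first_code = self.generate_code(video_stream)
        graphics_stream = self.compile_code(first_code)
        self.send(video_stream, graphics_stream)  # synchronous send-out
        second_code = self.capture_code()         # from the played region
        return second_code == first_code          # comparison unit

# Usage with trivial stand-ins: playback is faithful, so codes match.
rec = DisplayContentRecognizer(
    generate_code=lambda v: "CODE",
    compile_code=lambda c: c,
    send=lambda v, g: None,
    capture_code=lambda: "CODE",
)
assert rec.run(b"video") is True
```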
In one example, the units in any of the above apparatuses may be one or more integrated circuits configured to implement the above methods, for example: one or more application specific integrated circuits (ASICs), one or more digital signal processors (DSPs), one or more field programmable gate arrays (FPGAs), or a combination of at least two of these integrated circuit forms. As another example, when a unit in the apparatus is implemented by a processing element scheduling a program, the processing element may be a general-purpose processor, such as a central processing unit (CPU) or another processor capable of invoking programs. As yet another example, these units may be integrated together and implemented in the form of a system-on-a-chip (SoC).
Furthermore, a second encoding unit is added. In the time sequence corresponding to the first coded graphics stream, the second encoding unit inserts image-free time periods of equal or unequal length between adjacent static image information time periods, such that two or more image-free time periods of different lengths occur per unit time length. A time parameter is thereby added: for example, after the length and timing of the image-free time periods are changed, the represented information differs from the information previously represented, which noticeably increases the difficulty of cracking the feature code.
Furthermore, a third encoding unit is added, which adjusts the lengths of the static image information time periods in the time sequence corresponding to the first coded graphics stream.
In this way, two or more static image information time periods of different lengths occur per unit time length, so a time parameter is added: for example, after the length and timing of the static image information time periods are changed, the represented information differs from the information previously represented, which noticeably increases the difficulty of cracking the feature code.
Further, a first position adjustment unit is added for adjusting the display position of the first encoded graphics stream to the edge of the display area of the corresponding video stream.
It should be understood that while the first encoded graphics stream is played, the corresponding region blinks, which may disturb the recipient's normal viewing; shifting it to the edge of the display area therefore significantly improves viewing comfort.
Furthermore, a second position adjustment unit is added for adjusting the display position of the first coded graphics stream to the intersection of adjacent edges of the display area of the corresponding video stream, which further improves viewing comfort.
Further, a handling unit is added, which triggers a handling condition when the comparison result between the second coded graphics stream and the first coded graphics stream is inconsistent.
In public places such as stations or malls, the played information should be managed and cannot be played arbitrarily. If, in the course of identification, the actually played content is found to be inconsistent with the content that should be played, that is, the acquired video stream, measures should be taken; accordingly, the handling unit triggers a handling condition when the comparison result between the second encoded graphics stream and the first encoded graphics stream is inconsistent.
The handling conditions mainly include issuing warning information, skipping the video stream being sent, playing a default video stream, or stopping the sending of the video stream; these have already been stated in the explanation of the method and are not repeated here.
An embodiment of the present application further discloses a display content recognition system, which mainly comprises one or more memories and one or more processors:
the memory is used for storing instructions;
and the processor is configured to call and execute the instructions from the memory to perform the above method for identifying display content for rail transit.
Various objects, such as messages, information, devices, network elements, systems, apparatuses, actions, operations, procedures, and concepts, may be named in the present application. It should be understood that these specific names do not limit the related objects; the names may vary with circumstances, context, or usage habits, and the technical meaning of a term in the present application should be understood mainly from the function and technical effect it embodies in the technical solution.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It should also be understood that, in the various embodiments of the present application, "first", "second", etc. are used merely to distinguish different objects. For example, the first time window and the second time window merely denote different time windows; the designation has no effect on the time windows themselves, and "first", "second", etc. impose no limitation on the embodiments of the present application.
It is also to be understood that, in the absence of a specific statement or logical conflict, the terms and descriptions of the various embodiments herein are consistent and may be referenced by one another, and the technical features of the various embodiments may be combined to form new embodiments based on their inherent logical relationships.
It will be appreciated that the memory in the embodiments of the subject application can be either volatile memory or nonvolatile memory, or can include both volatile and nonvolatile memory.
The non-volatile memory may be ROM, Programmable Read Only Memory (PROM), Erasable PROM (EPROM), Electrically Erasable PROM (EEPROM), or flash memory.
Volatile memory can be RAM, which acts as an external cache. Many types of RAM are available, such as static random access memory (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct Rambus RAM (DR RAM).
The processor mentioned in any of the above may be a CPU, a microprocessor, an ASIC, or one or more integrated circuits that control the execution of a program of the above display content identification method. The processing unit and the storage unit may be decoupled and disposed on different physical devices, connected in a wired or wireless manner to implement their respective functions, so as to support the system chip in implementing the various functions of the foregoing embodiments. Alternatively, the processing unit and the memory may be disposed on the same device.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, or the portion thereof that substantially contributes over the prior art, may be embodied in the form of a software product stored in a computer-readable storage medium, the product including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to perform all or part of the steps of the methods of the embodiments of the present application. The aforementioned computer-readable storage media include various media capable of storing program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above embodiments are preferred embodiments of the present application, and the scope of protection of the present application is not limited by them; therefore, all equivalent changes made according to the structure, shape and principle of the present application shall be covered by the protection scope of the present application.

Claims (10)

1. A method for identifying display content for rail transit is characterized by comprising the following steps:
obtaining a video stream;
generating a first feature code corresponding to the video stream and coding the first feature code to obtain a first coded graphics stream;
inserting the generated first coded graphics stream into the corresponding video stream and sending it out;
obtaining a second feature code corresponding to the graphics stream after the first coded graphics stream is played; and
comparing the second feature code with the first feature code, wherein a consistent comparison result indicates that the video stream being played is consistent with the obtained video stream, and an inconsistent comparison result indicates that the video stream being played is abnormal;
wherein the first coded graphics stream comprises static image information, and image-free time periods are arranged between adjacent static image information time periods in the time sequence corresponding to the first coded graphics stream.
2. The method for identifying display content for rail transit according to claim 1, wherein: the number of static image information time periods having different lengths per unit time length in the time sequence corresponding to the first coded graphics stream is two or more.
3. The method for identifying display content for rail transit according to claim 1, wherein: the number of image-free time periods having different lengths per unit time length in the time sequence corresponding to the first coded graphics stream is two or more.
4. The method for identifying display content for rail transit according to claim 1, wherein: the first encoded graphics stream is located at an edge of a display area of a corresponding video stream.
5. The method for identifying display content for rail transit according to claim 4, wherein: the first encoded graphics stream is located at an intersection of adjacent edges of a display area corresponding to the video stream.
6. The method for identifying display content for rail transit according to any one of claims 1 to 5, wherein: a handling condition is triggered when the comparison result between the second coded graphics stream and the first coded graphics stream is inconsistent.
7. The method for identifying display content for rail transit according to claim 6, wherein: the handling condition comprises issuing warning information, playing a default video stream, skipping the video stream being sent, stopping the sending of the video stream, or turning off the display terminal.
8. A display content recognition apparatus, comprising:
a first acquisition unit that acquires a video stream;
a first encoding unit that generates a first feature code corresponding to the video stream and compiles the first feature code into a first coded graphics stream, wherein the first coded graphics stream comprises static image information, and image-free time periods are arranged between adjacent static image information time periods in the time sequence corresponding to the first coded graphics stream;
a communication unit that inserts the generated first coded graphics stream into the video stream corresponding to it and sends them out;
a second acquisition unit that obtains a second feature code corresponding to the graphics stream as displayed after the first coded graphics stream is played; and
a comparison unit that compares the second feature code with the first feature code and gives a comparison result.
9. A display content recognition system, the system comprising:
one or more memories for storing instructions; and
one or more processors configured to retrieve and execute the instructions from the memory, and perform the method according to any one of claims 1 to 7.
10. A computer-readable storage medium, the computer-readable storage medium comprising:
a program which, when executed by a processor, performs the method for identifying display content for rail transit according to any one of claims 1 to 7.
CN202010613965.3A 2020-06-30 2020-06-30 Display content identification method, identification device, identification system and computer-readable storage medium for rail transit Pending CN111818387A (en)


Publications (1)

Publication Number Publication Date
CN111818387A true CN111818387A (en) 2020-10-23

Family

ID=72856857


Country Status (1)

Country Link
CN (1) CN111818387A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1617585A (en) * 2004-11-01 2005-05-18 陆健 Mark anti-fake method for compressing video frequency flow
CN101976428A (en) * 2010-07-30 2011-02-16 南开大学 Binary image fragile watermark embedding and extraction method based on topology structure
CN103997652A (en) * 2014-06-12 2014-08-20 北京奇艺世纪科技有限公司 Video watermark embedding method and device
CN105657431A (en) * 2016-02-01 2016-06-08 杭州当虹科技有限公司 Watermarking algorithm based on DCT domain of video frame
CN107040787A (en) * 2017-03-30 2017-08-11 宁波大学 The 3D HEVC inter-frame information hidden methods that a kind of view-based access control model is perceived
CN107318041A (en) * 2017-06-29 2017-11-03 深圳市茁壮网络股份有限公司 The method and system that a kind of Video security is played
CN109379642A (en) * 2018-12-14 2019-02-22 连尚(新昌)网络科技有限公司 It is a kind of for adding and detecting the method and apparatus of video watermark



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20201023