CN111405349A - Information implantation method and device based on video content and storage medium - Google Patents

Information implantation method and device based on video content and storage medium

Info

Publication number
CN111405349A
CN111405349A (application number CN201910002096.8A)
Authority
CN
China
Prior art keywords
information
value
target object
pixel block
original pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910002096.8A
Other languages
Chinese (zh)
Other versions
CN111405349B (en)
Inventor
张美娜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201910002096.8A priority Critical patent/CN111405349B/en
Publication of CN111405349A publication Critical patent/CN111405349A/en
Application granted granted Critical
Publication of CN111405349B publication Critical patent/CN111405349B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/435: Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
    • H04N 21/4312: Generation of visual interfaces for content selection or interaction, involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N 21/4722: End-user interface for requesting additional data associated with the content
    • H04N 21/47815: Electronic shopping
    • H04N 21/812: Monomedia components involving advertisement data

Abstract

The application provides an information implantation method and device based on video content, and a storage medium. The method includes: determining a target area corresponding to a target object in a video, and dividing the target area into a plurality of original pixel blocks according to the size of a preset unit pixel block; and encoding related information of the original pixel blocks with the value-added information of the target object according to a preset encoding rule, generating a plurality of encoded pixel blocks containing the value-added information, so that when a target device scans the target area corresponding to the target object, it parses the encoded pixel blocks to obtain the value-added information of the target object. With this method, information related to the target object can be implanted into the target object non-invasively, occlusion of the video content is avoided, and the continuity of video watching is preserved, solving the technical problems in the prior art that information implantation intrudes on and occludes the video content and interrupts the continuity of video watching.

Description

Information implantation method and device based on video content and storage medium
Technical Field
The present application relates to the field of internet technologies, and in particular, to a method and an apparatus for embedding information based on video content, and a storage medium.
Background
With the improvement of living standards, films, television dramas, short videos and the like have gradually become an important part of people's entertainment, and the development of the video industry drives the circulation of commodities and the spread of information. For example, the clothes worn and the restaurants visited by characters in a popular drama (which can be regarded as props used in the video) are pursued by fans, who are eager to find the same clothes as a character in the drama, the restaurant where a character dined, and so on.
To make it easier for a user to obtain information about props in a video, in the related art, prop information is implanted by way of a two-dimensional code or a card, providing the user with a channel for obtaining that information. Specifically, a two-dimensional code, or a card carrying a short link, pops up during video playback; the user obtains the prop information by scanning the two-dimensional code, or by clicking the short link on the card.
However, the intrusiveness of the above methods blocks the video content, interrupts the continuity of the user's viewing, and degrades the viewing experience.
Disclosure of Invention
The present application is directed to solving, at least to some extent, one of the technical problems in the related art.
Therefore, the application provides an information implantation method and device based on video content, and a storage medium, to solve the technical problems in the prior art that information implantation intrudes on and blocks the video content and interrupts the continuity of video watching.
In order to achieve the above object, an embodiment of a first aspect of the present application provides an information embedding method based on video content, including:
determining a target area corresponding to a target object in a video, and dividing the target area into a plurality of original pixel blocks according to the size of a preset unit pixel block;
and encoding related information of the plurality of original pixel blocks with the value-added information of the target object according to a preset encoding rule, and generating a plurality of encoded pixel blocks containing the value-added information, so that when a target device scans the target area corresponding to the target object, it parses the encoded pixel blocks to obtain the value-added information of the target object.
The information implantation method based on video content determines a target area corresponding to a target object in a video, divides the target area into a plurality of original pixel blocks according to the size of a preset unit pixel block, and encodes related information of the original pixel blocks with the value-added information of the target object according to a preset encoding rule, generating a plurality of encoded pixel blocks containing the value-added information, so that when a target device scans the target area corresponding to the target object, it parses the encoded pixel blocks to obtain the value-added information. Because the related information of the original pixel blocks is encoded according to the value-added information of the target object, the value-added information is fused with the target object itself: information related to the target object is implanted non-invasively and the video content is not occluded. Obtaining the value-added information by scanning the target area with the target device and parsing the encoded pixel blocks does not interrupt the viewing process, preserving the continuity of video watching, and it spares the user the tedious step of manually entering query information to search; compared with manual searching, both the accuracy and the efficiency of information acquisition are improved.
In order to achieve the above object, a second aspect of the present application provides an information embedding apparatus based on video content, including:
the device comprises a dividing module, a judging module and a judging module, wherein the dividing module is used for determining a target area corresponding to a target object in a video and dividing the target area into a plurality of original pixel blocks according to the size of a preset unit pixel block;
and the encoding module is used for encoding related information of the plurality of original pixel blocks with the value-added information of the target object according to a preset encoding rule, generating a plurality of encoded pixel blocks containing the value-added information, so that when a target device scans the target area corresponding to the target object, it parses the encoded pixel blocks to obtain the value-added information of the target object.
The information implantation device based on video content determines a target area corresponding to a target object in a video, divides the target area into a plurality of original pixel blocks according to the size of a preset unit pixel block, and encodes related information of the original pixel blocks with the value-added information of the target object according to a preset encoding rule, generating a plurality of encoded pixel blocks containing the value-added information, so that when a target device scans the target area corresponding to the target object, it parses the encoded pixel blocks to obtain the value-added information. Because the related information of the original pixel blocks is encoded according to the value-added information of the target object, the value-added information is fused with the target object itself: information related to the target object is implanted non-invasively and the video content is not occluded. Obtaining the value-added information by scanning the target area with the target device and parsing the encoded pixel blocks does not interrupt the viewing process, preserving the continuity of video watching, and it spares the user the tedious step of manually entering query information to search; compared with manual searching, both the accuracy and the efficiency of information acquisition are improved.
To achieve the above object, a third aspect of the present application provides a computer device, including: a processor and a memory; wherein the processor executes a program corresponding to the executable program code by reading the executable program code stored in the memory, so as to implement the information implantation method based on video content according to the embodiment of the first aspect.
To achieve the above object, a fourth aspect of the present application provides a non-transitory computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements a video content-based information embedding method according to the first aspect.
To achieve the above object, a fifth aspect of the present application provides a computer program product, where instructions of the computer program product, when executed by a processor, implement the information embedding method based on video content according to the first aspect.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a schematic flowchart illustrating a video content-based information embedding method according to an embodiment of the present application;
fig. 2 is a schematic flowchart illustrating a video content-based information embedding method according to a second embodiment of the present application;
FIG. 3(a) is a diagram of an example of encoding when color information of an original pixel block corresponds to 1 unit of encoding information;
FIG. 3(b) is a diagram of an example of encoding when color information of an original pixel block corresponds to 2 units of encoding information;
fig. 4 is a schematic flowchart of a video content-based information embedding method according to a third embodiment of the present application;
fig. 5 is a schematic structural diagram of an information embedding apparatus based on video content according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of an information embedding apparatus based on video content according to a second embodiment of the present application;
fig. 7 is a schematic structural diagram of an information embedding apparatus based on video content according to a third embodiment of the present application; and
fig. 8 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary and intended to be used for explaining the present application and should not be construed as limiting the present application.
The information embedding method, apparatus, and storage medium based on video content according to the embodiments of the present application are described below with reference to the accompanying drawings.
Fig. 1 is a flowchart illustrating a video content-based information embedding method according to an embodiment of the present disclosure.
As shown in fig. 1, the information embedding method based on video content may include the steps of:
step 101, determining a target area corresponding to a target object in a video, and dividing the target area into a plurality of original pixel blocks according to the size of a preset unit pixel block.
The target object refers to a prop in the video, such as the clothes a character wears, the bag the character carries, the glasses the character wears, an indoor dining table, a ceiling lamp, or a tourist attraction.
In this embodiment, for a target object in a video, a target area corresponding to the target object in the video may be determined, and the target area is divided into a plurality of original pixel blocks according to a preset size of a unit pixel block.
The size of the unit pixel block may be preset, for example, the size of the unit pixel block is set to be one pixel; or may be determined according to the size of the target area corresponding to the target object.
As a possible implementation manner, before the target area is divided into a plurality of original pixel blocks according to the size of the preset unit pixel block, the size of the unit pixel block may be determined according to the size of the value-added information of the target object and the size of the target area. For example, when the value-added information of the target object is large and the target area is small, the size of the unit pixel block can be set to 1 pixel; when the value-added information of the target object is small and the target area is large, the size of the unit pixel block can be set to i × j pixels, where i and j are positive integers.
As one possible implementation, before dividing the target region into a plurality of original pixel blocks according to the preset size of the unit pixel block, the size of the unit pixel block may be determined according to the camera module precision of the target device. The target device is a device for scanning a target area where a target object in a video is located to obtain value-added information of the target object, such as a tablet computer and a smart phone. When the precision of the camera module of the target device is high, the unit pixel block can be determined to be small, for example, the unit pixel block is determined to be one pixel; when the accuracy of the camera module of the target device is low and the recognition accuracy cannot reach one pixel on the screen, it may be determined that the unit pixel block has a large size, for example, the unit pixel block is determined to be a 3 × 3 matrix.
Further, after the size of the unit pixel block is determined, the target region corresponding to the target object may be divided into a plurality of original pixel blocks according to the size of the unit pixel block.
For example, assuming that the size of the determined unit pixel block is 1 pixel, each pixel included in the target region is taken as an original pixel block; assuming that the size of the determined unit pixel block is 2 × 2 matrix and the size of the target region is 8 × 8, the target region is divided into 16 original pixel blocks by using the 2 × 2 matrix as a division standard, and one original pixel block includes 4 pixels.
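The division step above can be sketched as follows. This is a minimal illustration; the function name and the list-of-rows representation of the target region are assumptions for the sketch, not part of the patent.

```python
def divide_into_blocks(region, block_size):
    """Split a rectangular pixel region (a list of rows) into
    block_size x block_size original pixel blocks, row-major order."""
    height, width = len(region), len(region[0])
    blocks = []
    for top in range(0, height, block_size):
        for left in range(0, width, block_size):
            # Each block is a small list of rows cut out of the region.
            block = [row[left:left + block_size]
                     for row in region[top:top + block_size]]
            blocks.append(block)
    return blocks

# An 8x8 target region divided with a 2x2 unit pixel block
# yields 16 original pixel blocks of 4 pixels each.
region = [[r * 8 + c for c in range(8)] for r in range(8)]
blocks = divide_into_blocks(region, 2)
```

With a unit pixel block of 1 pixel, `block_size=1` reduces each block to a single pixel, matching the first case described in the text.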
Step 102, encoding related information of the plurality of original pixel blocks with the value-added information of the target object according to a preset encoding rule, and generating a plurality of encoded pixel blocks containing the value-added information, so that when a target device scans the target area corresponding to the target object, it parses the plurality of encoded pixel blocks to obtain the value-added information of the target object.
In this embodiment, after the target area is divided into a plurality of original pixel blocks, the related information of the plurality of original pixel blocks may be encoded according to the value-added information of the target object according to a preset encoding rule, so as to generate a plurality of encoded pixel blocks including the value-added information of the target object.
The value-added information of the target object may be introduction information related to the target object, a purchase link, the address of an associated website, the brand of the target object, its price, and the like. The value-added information of the target object is represented in binary form (for example, as ASCII codes).
In this embodiment, the value-added information of the target object may be fused in the target region where the target object is located in an encoding manner, so as to implement non-invasive information implantation, so that the implanted value-added information is not perceived by the user, and the viewing experience of the user is not affected.
Specifically, different encoding objects, that is, different kinds of related information of the original pixel blocks used for encoding, correspond to different encoding rules. When the color information of an original pixel block is encoded, the rule is to encode 0 or 1 through the parity of the color (pixel value); when the alpha (transparency) channel information of an original pixel block is encoded, the rule is to encode 0 or 1 through different change frequencies of the alpha channel value.
For example, when encoding the color information of the original pixel blocks, the initial pixel values may first be preprocessed so that all of them become odd; the preprocessed pixel values are then encoded according to the value-added information of the target object: the pixel value of a block that encodes 0 is increased or decreased by 1, changing the odd pixel value into an even one, while the odd pixel value of a block that encodes 1 is kept unchanged.
When encoding the alpha channel information of the original pixel blocks, the alpha channel value of each original pixel block may be varied at a certain frequency. For example, for an original pixel block that encodes 0, the change frequency of the alpha channel value is set to 20 Hz, and for an original pixel block that encodes 1, it is set to 30 Hz.
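The color-parity rule can be sketched as a small encode/decode round trip. This is a minimal sketch of one variant described in the text (preprocess all pixel values to odd, then make encoded-0 values even); the function names and sample values are illustrative assumptions.

```python
def preprocess_to_odd(pixel_values):
    # Preprocessing step: make every initial pixel value odd.
    # Even values gain 1 (254 -> 255 stays within 0..255); odd values stay.
    return [v if v % 2 == 1 else v + 1 for v in pixel_values]

def encode_bits(odd_values, bits):
    # Bit 0: subtract 1 so the odd value becomes even; bit 1: keep it odd.
    return [v if bit == 1 else v - 1 for v, bit in zip(odd_values, bits)]

def decode_bits(encoded_values):
    # Parity recovers the embedded bit: even -> 0, odd -> 1.
    return [v % 2 for v in encoded_values]

bits = [0, 1, 1, 0, 1]               # value-added information to embed
initial = [10, 37, 255, 2, 118]      # one color value per original pixel block
encoded = encode_bits(preprocess_to_odd(initial), bits)
```

Each pixel value changes by at most 1, which is why the embedded information is imperceptible to the viewer.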
Further, after the plurality of encoded pixel blocks containing the value-added information are generated by encoding, the value-added information of the target object is embedded in the video content. While watching a video, if a user is interested in a certain prop, wants more information about it, or wants to purchase it, the user can scan the target area where the prop is located with a target device; by decoding the plurality of encoded pixel blocks in the target area, the target device obtains the value-added information of the target object. The user therefore does not need to manually enter query information to search; the information related to the target object is obtained directly by scanning, which improves search accuracy.
The information implantation method based on video content of this embodiment determines a target region corresponding to a target object in a video, divides the target region into a plurality of original pixel blocks according to the preset size of a unit pixel block, and encodes related information of the original pixel blocks with the value-added information of the target object according to a preset encoding rule, generating a plurality of encoded pixel blocks containing the value-added information, so that when a target device scans the target region, it parses the encoded pixel blocks to obtain the value-added information. Because the related information of the original pixel blocks is encoded according to the value-added information of the target object, the value-added information is fused with the target object itself: information related to the target object is implanted non-invasively and the video content is not occluded. Obtaining the value-added information by scanning the target region with the target device does not interrupt the viewing process, preserving the continuity of video watching, and it spares the user the tedious step of manually entering query information to search; compared with manual searching, both the accuracy and the efficiency of information acquisition are improved.
In the embodiment of the present application, different encoding rules may be adopted to encode the original pixel blocks of the target region. The implementation process of encoding the related information of the plurality of original pixel blocks with the value-added information of the target object and generating the plurality of encoded pixel blocks containing the value-added information is described below for each encoding rule in turn.
In a possible implementation manner of the embodiment of the present application, as shown in fig. 2, on the basis of the embodiment shown in fig. 1, step 102 may include the following steps:
step 201, a color change encoding rule corresponding to the color information of each original pixel block is determined.
Step 202, performing data change on the color information of each original pixel block according to a color change coding rule to generate first color information.
In this embodiment, the color change encoding rule corresponding to the color information of each original pixel block may be determined according to the amount of data that each original pixel block needs to carry.
As an example, when the color information of each original pixel block corresponds to 1 unit of encoding information, the color information of each original pixel block is changed into an even number, yielding the first color information. Of course, the first color information may equally be obtained by changing the color information of each original pixel block into an odd number.
Here, a unit of encoding information is 1 bit of data.
The color information of each original pixel block in the target area can be any pixel value from 0 to 255, and each pixel value is either odd or even. In this example the color information of each original pixel block is changed into an even number: for each pixel value of an original pixel block, if the value is even it is kept unchanged; if it is odd, 1 is added to or subtracted from it so that it becomes even. This yields the first color information, and after the data change the first color information of every original pixel block is even.
As an example, when the color information of each original pixel block corresponds to N units of encoding information, the color information of each original pixel block is changed to an integer multiple of 2^N to obtain the first color information, where N is a positive integer.
When the value-added information of the target object is large and the target area is small, one original pixel block may need to carry N bits of the value-added information, that is, N bits of the value-added information need to be encoded in a single original pixel block. In this case the color information of each original pixel block is changed to an integer multiple of 2^N, so that all the numbers between k·2^N and (k+1)·2^N can be used for encoding, where k is a natural number. Changing the color information of each original pixel block to an integer multiple of 2^N can be realized by the following formula:
a' = floor(a / 2^N) · 2^N
where a' is the first color information, a is the color information of the original pixel block, and floor denotes rounding down.
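The formula above can be sketched in one line of code; the function name is an illustrative assumption, and integer floor division implements the rounding down.

```python
def quantize_to_multiple(a, n):
    # a' = floor(a / 2**n) * 2**n: snap the color value down to the
    # nearest integer multiple of 2**n, freeing the low n bits for data.
    return (a // 2 ** n) * 2 ** n
```

For example, with N = 2 the value 157 becomes 156, a multiple of 4, and the values 156 to 159 are then available to carry the four possible 2-bit codes.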
It should be noted that, when the original pixel block only includes one pixel, the pixel value of the pixel is the color information of the original pixel block; when the original pixel block includes a plurality of pixels, the color information of the original pixel block may be obtained by calculating an average value of pixel values of each pixel in the original pixel block and then rounding the obtained average value, or the maximum/minimum pixel value in the original pixel block may be used as the color information of the original pixel block, or a median value in a plurality of pixel values may be selected as the color information of the original pixel block, which is not limited in this application.
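Two of the aggregation alternatives above (rounded average and median) can be sketched as follows; the function names are illustrative assumptions, and the standard-library `statistics.median` is used for the median variant.

```python
import statistics

def block_color_mean(block):
    # Color information as the rounded average of all pixel values in the block.
    pixels = [p for row in block for p in row]
    return round(sum(pixels) / len(pixels))

def block_color_median(block):
    # Color information as the median of all pixel values in the block.
    pixels = [p for row in block for p in row]
    return int(statistics.median(pixels))
```

For a 2×2 block such as `[[10, 20], [30, 100]]` the two choices differ noticeably (40 versus 25), which is why the patent leaves the aggregation method open.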
And 203, coding the first color information according to the value-added information of the target object to generate a plurality of coding pixel blocks with second color information.
In this embodiment, after the color information of each original pixel block is subjected to data change according to the color change coding rule to generate the first color information, the first color information may be coded according to the value added information of the target object to generate a plurality of coded pixel blocks having the second color information.
In an example, when the color information of each original pixel block corresponds to 1 unit of encoding information, after the color information of each original pixel block has been changed into an even number, the first color information is encoded according to the value-added information of the target object: the first color information of a block encoding 1 is increased or decreased by 1 to become odd, while that of a block encoding 0 is kept unchanged, generating a plurality of encoded pixel blocks with second color information. The second color information thus contains both odd and even numbers, where an odd number represents a '1' in the value-added information and an even number represents a '0'. As shown in fig. 3(a), each square in the upper diagram represents an original pixel block and its value is the first color information of that block; each square in the lower diagram represents an encoded pixel block and its value is the second color information. By decoding the encoded pixel blocks, the value-added information of the target object can be determined to be 01101.
In a second example, when the number of unit coding information corresponding to the color information of each original pixel block is N, assuming N is 2, the color information of each original pixel block may first be changed to an integer multiple of 4, and the first color information may then be encoded according to the value-added information of the target object, thereby generating a plurality of coded pixel blocks having second color information. For an original pixel block: when encoding 00, the first color information is kept unchanged; when encoding 01, 1 is added to the first color information; when encoding 10, 2 is added; when encoding 11, 3 is added. As shown in fig. 3(b), each square in the upper diagram represents an original pixel block and the value in it is the first color information; each square in the lower diagram represents a coded pixel block and the value in it is the second color information. By decoding the coded pixel blocks, the value-added information of the target object can be determined to be 1100101101.
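The generalization to N bits per block can be sketched as follows (a hedged illustration; the helper names are hypothetical). Rounding the color down to a multiple of 2^N makes room for the bit group's value in the remainder:

```python
def to_first_color(color, n_bits):
    """Step 202: change the color information to a multiple of 2**n_bits."""
    base = 1 << n_bits
    return (color // base) * base

def encode_block(first_color, bit_group):
    """Step 203: add the bit group's integer value (0 .. 2**N - 1)."""
    return first_color + int(bit_group, 2)

def decode_block(second_color, n_bits):
    """Target device side: the remainder modulo 2**N is the bit group."""
    return format(second_color % (1 << n_bits), f"0{n_bits}b")

# N = 2 as in this example: first colors become multiples of 4.
first = to_first_color(103, 2)     # 100
coded = encode_block(first, "11")  # 103
bits = decode_block(coded, 2)      # "11"
```

Setting `n_bits = 1` reproduces the even/odd parity scheme of the first example.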
In the video-content-based information embedding method of this embodiment, the color change coding rule corresponding to the color information of each original pixel block is determined, the color information of each original pixel block is changed according to that rule to generate the first color information, and the first color information is then encoded according to the value-added information of the target object to generate a plurality of coded pixel blocks having second color information. The corresponding value-added information is thereby embedded in the color information of the target object, achieving non-intrusive information embedding.
In a possible implementation manner of the embodiment of the present application, as shown in fig. 4, on the basis of the embodiment shown in fig. 1, step 102 may include the following steps:
In step 301, a frequency change encoding rule corresponding to the alpha channel value of each original pixel block is determined.
The frequency change encoding rule may be preset. For example, when the change frequency of the alpha channel value is f1, it represents the code 1; when the change frequency is f2, the code 0; when the change frequency is f3, the code 00; when the change frequency is f4, the code 01; when the change frequency is f5, the code 10; when the change frequency is f6, the code 11; and so on.
In this embodiment, the corresponding frequency change encoding rule may be determined according to the amount of value-added information that each original pixel block needs to carry. For example, when each original pixel block needs to carry 1 bit of value-added information, the rule is determined such that a change frequency of f1 for the alpha channel value represents the code 1 and a change frequency of f2 represents the code 0. As another example, when each original pixel block needs to carry 2 bits of value-added information, the rule is determined such that a change frequency of f3 represents the code 00, f4 represents the code 01, f5 represents the code 10, and f6 represents the code 11.
At step 302, an average pixel value for each original pixel block is determined.
When each original pixel block only contains one pixel, determining the pixel value of the pixel as the average pixel value of the original pixel block; when the original pixel block includes a plurality of pixels, an average value may be calculated as an average pixel value of the original pixel block according to a pixel value of each pixel in the original pixel block; alternatively, a median value of the plurality of pixel values may be determined as an average pixel value of the original pixel block.
Step 303, encoding the first alpha channel value of each original pixel block according to the frequency change encoding rule, the average pixel value of each original pixel block, and the value added information of the target object, and generating a plurality of encoded pixel blocks having the second alpha channel value.
Related research shows that the alpha channel value is quantized to the range 0 to 1, and a change of 0.01 in the alpha channel value can be perceived by the camera of the target device. Therefore, in this embodiment, the first alpha channel value of an original pixel block may be varied at the change frequency given by the frequency change encoding rule to encode the value-added information.
As an example, the change amount of the alpha channel value of each original pixel block may be determined from the average pixel value of the original pixel block. For instance, a correspondence between average pixel values and change ranges of the alpha channel value may be stored in advance, with larger average pixel values mapped to smaller change ranges and smaller average pixel values mapped to larger ones. After the average pixel value of an original pixel block is determined, its change range is obtained by querying this correspondence, and any value falling within that range is selected as the change amount of the alpha channel value of the original pixel block. For example, if the change range is 0.01 to 0.06, any number between 0.01 and 0.06, such as 0.05, may be selected as the change amount.
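The stored correspondence described above might look like the following sketch; all thresholds and amplitude ranges here are hypothetical values chosen for illustration:

```python
# Hypothetical correspondence between average pixel value (0-255) and the
# change range of the alpha channel value: brighter blocks get a smaller
# range, darker blocks a larger one.
AMPLITUDE_RANGES = [
    (0, 85, (0.04, 0.06)),     # dark blocks: larger change range
    (85, 170, (0.02, 0.04)),
    (170, 256, (0.01, 0.02)),  # bright blocks: smaller change range
]

def alpha_change_amount(avg_pixel):
    """Query the change range for an average pixel value and pick a value
    inside it (here simply the lower bound; any value in range is valid)."""
    for lo_px, hi_px, (lo_a, _hi_a) in AMPLITUDE_RANGES:
        if lo_px <= avg_pixel < hi_px:
            return lo_a
    raise ValueError("average pixel value out of 0-255 range")
```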
After determining the change amount of the alpha channel value of each original pixel block, the first alpha channel value of each original pixel block may be encoded according to the determined frequency change encoding rule and the value-added information of the target object, and a plurality of encoded pixel blocks having the second alpha channel value may be generated.
For example, suppose the determined frequency change encoding rule is that a change frequency of f1 for the alpha channel value represents the code 1 and a change frequency of f2 represents the code 0. For an original pixel block with a first alpha channel value of 0.1 and a determined change amount of 0.05: if the original pixel block is used to encode a "1" in the value-added information, its alpha channel value alternates between 0.1 and 0.15 at frequency f1; if it is used to encode a "0", its alpha channel value alternates between 0.1 and 0.15 at frequency f2. A coded pixel block is obtained by encoding the first alpha channel value in this way. After the target device scans the target area, it decodes the coded pixel block and determines the corresponding bit (0 or 1) of the value-added information from the change frequency of the coded pixel block's alpha channel value.
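The temporal encoding can be sketched as follows, assuming concrete hypothetical values f1 = 4 Hz, f2 = 2 Hz, and a 60 fps video (none of these are specified in the application); the decoder estimates the toggle frequency from level transitions:

```python
def alpha_waveform(base_alpha, delta, bit, f1=4.0, f2=2.0, fps=60, seconds=1.0):
    """Per-frame alpha values toggling between base_alpha and
    base_alpha + delta, at frequency f1 for bit '1', f2 for bit '0'."""
    freq = f1 if bit == "1" else f2
    frames_per_half = max(1, int(fps / (2 * freq)))  # frames per half-period
    values = []
    for frame in range(int(fps * seconds)):
        high = (frame // frames_per_half) % 2 == 1
        values.append(round(base_alpha + delta, 4) if high else base_alpha)
    return values

def decode_bit(values, fps=60, f1=4.0, f2=2.0):
    """Estimate the toggle frequency from level transitions and map it
    back to the nearer of f1 ('1') or f2 ('0')."""
    toggles = sum(1 for a, b in zip(values, values[1:]) if a != b)
    freq = toggles / (2 * (len(values) / fps))  # full cycles per second
    return "1" if abs(freq - f1) < abs(freq - f2) else "0"
```

For the example above (base alpha 0.1, change amount 0.05), `alpha_waveform(0.1, 0.05, "1")` oscillates between 0.1 and 0.15 and decodes back to "1".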
In the information embedding method based on video content according to this embodiment, the frequency change encoding rule corresponding to the alpha channel value of each original pixel block and the average pixel value of each original pixel block are determined, and then the first alpha channel value of each original pixel block is encoded according to the frequency change encoding rule, the average pixel value of each original pixel block and the value-added information of the target object, so as to generate a plurality of encoded pixel blocks having the second alpha channel value, thereby achieving the purpose of embedding corresponding value-added information in the alpha channel information of the original pixel block, and implementing non-intrusive information embedding.
In order to implement the above embodiments, the present application further provides an information embedding apparatus based on video content.
Fig. 5 is a schematic structural diagram of an information embedding apparatus based on video content according to an embodiment of the present application.
As shown in fig. 5, the video content-based information embedding apparatus 50 includes a partitioning module 510 and an encoding module 520.
the dividing module 510 is configured to determine a target region corresponding to a target object in a video, and divide the target region into a plurality of original pixel blocks according to a preset size of a unit pixel block.
In a possible implementation manner of the embodiment of the present application, the dividing module 510 is further configured to determine the size of the unit pixel block according to the size of the value-added information of the target object and the size of the target region before dividing the target region into a plurality of original pixel blocks according to the size of the preset unit pixel block.
In a possible implementation manner of the embodiment of the present application, the dividing module 510 is further configured to determine the size of the unit pixel block according to the accuracy of the camera module of the target device before dividing the target region into a plurality of original pixel blocks according to the preset size of the unit pixel block.
The encoding module 520 is configured to encode, according to a preset encoding rule, the related information of the plurality of original pixel blocks according to the value-added information of the target object, and generate a plurality of encoded pixel blocks including the value-added information, so that when the target device scans a target region corresponding to the target object, the plurality of encoded pixel blocks are analyzed to obtain the value-added information of the target object.
Further, in a possible implementation manner of the embodiment of the present application, as shown in fig. 6, on the basis of the embodiment shown in fig. 5, the encoding module 520 includes:
A first determining unit 5201, configured to determine a color change encoding rule corresponding to the color information of each original pixel block.
A first generating unit 5202 is used for generating first color information by performing data change on the color information of each original pixel block according to a color change coding rule.
In a possible implementation manner of the embodiment of the present application, the first generating unit 5202 is specifically configured to: when the number of unit encoding information corresponding to the color information of each original pixel block is 1, change the color information of each original pixel block to an even number to obtain the first color information; and when the number of unit encoding information corresponding to the color information of each original pixel block is N, change the color information of each original pixel block to an integer multiple of 2^N to obtain the first color information, where N is a positive integer.
The first encoding unit 5203 is configured to encode the first color information according to the value-added information of the target object, and generate a plurality of encoded pixel blocks having the second color information.
Therefore, the purpose of implanting corresponding value-added information into the color information of the target object is achieved, and noninvasive information implantation is realized.
In a possible implementation manner of the embodiment of the present application, as shown in fig. 7, on the basis of the embodiment shown in fig. 5, the encoding module 520 includes:
a second determination unit 5211 for determining a frequency change encoding rule corresponding to an alpha channel value of each original pixel block;
a third determination unit 5212 for determining an average pixel value of each original pixel block.
A second encoding unit 5213, configured to encode the first alpha channel value of each original pixel block according to the frequency change encoding rule, the average pixel value of each original pixel block, and the value added information of the target object, and generate a plurality of encoded pixel blocks having a second alpha channel value.
Therefore, the purpose of implanting corresponding value-added information into the alpha channel information of the original pixel block is achieved, and noninvasive information implantation is achieved.
It should be noted that the foregoing explanation of the embodiment of the information embedding method based on video content is also applicable to the information embedding apparatus based on video content of this embodiment, and the implementation principle is similar, and is not repeated here.
The video content-based information embedding apparatus determines a target area corresponding to a target object in a video, divides the target area into a plurality of original pixel blocks according to a preset unit pixel block size, and encodes related information of the original pixel blocks according to the value-added information of the target object under a preset encoding rule, generating a plurality of coded pixel blocks containing the value-added information, so that when the target device scans the target area corresponding to the target object, it parses the coded pixel blocks to obtain the value-added information of the target object. Because the related information of the original pixel blocks is encoded according to the value-added information of the target object, the value-added information is fused with the target object in the video; information related to the target object is embedded non-intrusively, and occlusion of the video content is avoided. The value-added information is obtained by scanning the target area with the target device and parsing the coded pixel blocks, so the video viewing process need not be interrupted and viewing continuity is preserved. Since the value-added information can be obtained by scanning, the tedious step of a user manually entering query information to search is avoided; compared with manual searching, both the accuracy and the efficiency of information acquisition are improved.
In order to implement the foregoing embodiments, the present application also provides a computer device, including: a processor and a memory. Wherein the processor executes a program corresponding to the executable program code by reading the executable program code stored in the memory, for implementing the video content-based information embedding method as described in the foregoing embodiments.
FIG. 8 is a block diagram of a computer device provided in an embodiment of the present application, illustrating an exemplary computer device 90 suitable for use in implementing embodiments of the present application. The computer device 90 shown in fig. 8 is only an example, and should not bring any limitation to the function and the scope of use of the embodiments of the present application.
As shown in fig. 8, the computer device 90 is in the form of a general purpose computer device. The components of computer device 90 may include, but are not limited to: one or more processors or processing units 906, a system memory 910, and a bus 908 that couples the various system components (including the system memory 910 and the processing unit 906).
Bus 908 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. These architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the enhanced ISA bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus, to name a few.
Computer device 90 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by computer device 90 and includes both volatile and nonvolatile media, removable and non-removable media.
The system Memory 910 may include computer system readable media in the form of volatile Memory, such as Random Access Memory (RAM) 911 and/or cache Memory 912. The computer device 90 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 913 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 8, and commonly referred to as a "hard disk drive"). Although not shown in FIG. 8, a disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a Compact disk Read Only Memory (CD-ROM), a Digital versatile disk Read Only Memory (DVD-ROM), or other optical media) may be provided. In these cases, each drive may be connected to bus 908 by one or more data media interfaces. System memory 910 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the application.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present application may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, or C++, as well as conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server.
Program/utility 914 having a set (at least one) of program modules 9140 may be stored, for example, in system memory 910, such program modules 9140 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which or some combination of these examples may comprise an implementation of a network environment. Program modules 9140 generally perform the functions and/or methods of embodiments described herein.
Computer device 90 may also communicate with one or more external devices 10 (e.g., a keyboard, a pointing device, a display 100, etc.), with one or more devices that enable a user to interact with the computer device 90, and/or with any device (e.g., a network card, a modem, etc.) that enables the computer device 90 to communicate with one or more other computing devices. Such communication may occur via the input/output (I/O) interface 902. Moreover, the computer device 90 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the Internet) via the network adapter 900. As shown in FIG. 8, the network adapter 900 communicates with the other modules of the computer device 90 via the bus 908. It should be understood that, although not shown in FIG. 8, other hardware and/or software modules may be used in conjunction with the computer device 90, including, but not limited to: device drivers, redundant processing units, external disk drive arrays, RAID systems, and the like.
The processing unit 906 executes various functional applications and data processing by executing programs stored in the system memory 910, for example, implementing the video content-based information embedding method mentioned in the foregoing embodiments.
In order to implement the foregoing embodiments, the present application also proposes a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the video content-based information embedding method as described in the foregoing embodiments.
In order to implement the foregoing embodiments, the present application also proposes a computer program product, wherein when the instructions in the computer program product are executed by a processor, the information implantation method based on video content as described in the foregoing embodiments is implemented.
In the description herein, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing steps of a custom logic function or process, and alternate implementations are included within the scope of the preferred embodiment of the present application in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc. Although embodiments of the present application have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present application, and that variations, modifications, substitutions and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.

Claims (10)

1. An information embedding method based on video content is characterized by comprising the following steps:
determining a target area corresponding to a target object in a video, and dividing the target area into a plurality of original pixel blocks according to the size of a preset unit pixel block;
and coding the related information of the original pixel blocks according to the value-added information of the target object according to a preset coding rule, and generating a plurality of coding pixel blocks containing the value-added information so that when the target device scans a target area corresponding to the target object, the coding pixel blocks are analyzed to obtain the value-added information of the target object.
2. The method of claim 1, wherein before said dividing said target region into a plurality of original pixel blocks according to a preset unit pixel block size, further comprising:
and determining the size of the unit pixel block according to the size of the value-added information of the target object and the size of the target area.
3. The method of claim 1, wherein before said dividing said target region into a plurality of original pixel blocks according to a preset unit pixel block size, further comprising:
and determining the size of the unit pixel block according to the precision of the camera module of the target equipment.
4. The method according to claim 1, wherein said encoding information related to the original pixel blocks according to the value-added information of the target object according to a preset encoding rule to generate a plurality of encoded pixel blocks containing the value-added information comprises:
determining a color change encoding rule corresponding to the color information of each original pixel block;
performing data change on the color information of each original pixel block according to the color change coding rule to generate first color information;
and coding the first color information according to the value-added information of the target object to generate a plurality of coding pixel blocks with second color information.
5. The method of claim 4, wherein the data-varying the color information of each original pixel block according to the color-varying encoding rule to generate first color information comprises:
when the number of unit coding information corresponding to the color information of each original pixel block is 1, performing data change on the color information of each original pixel block to an even number to obtain first color information;
when the number of unit coding information corresponding to the color information of each original pixel block is N, performing data change on the color information of each original pixel block to an integer multiple of 2^N to obtain the first color information, wherein N is a positive integer.
6. The method according to claim 1, wherein said encoding information related to the original pixel blocks according to the value-added information of the target object according to a preset encoding rule to generate a plurality of encoded pixel blocks containing the value-added information comprises:
determining a frequency change encoding rule corresponding to an alpha channel value for each original pixel block;
determining an average pixel value for each original pixel block;
and coding the first alpha channel value of each original pixel block according to the frequency change coding rule, the average pixel value of each original pixel block and the value-added information of the target object to generate a plurality of coding pixel blocks with second alpha channel values.
7. An information embedding apparatus based on video content, comprising:
a dividing module, configured to determine a target area corresponding to a target object in a video, and to divide the target area into a plurality of original pixel blocks according to the size of a preset unit pixel block;
and the coding module is used for coding the related information of the original pixel blocks according to the value-added information of the target object according to a preset coding rule to generate a plurality of coding pixel blocks containing the value-added information, so that when the target device scans a target area corresponding to the target object, the coding pixel blocks are analyzed to obtain the value-added information of the target object.
8. The information implanting device of claim 7, wherein the partitioning module is further configured to:
before the target area is divided into the plurality of original pixel blocks according to the preset unit pixel block size, determine the unit pixel block size according to the size of the value-added information of the target object and the size of the target area.
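Claim 8 ties the unit block size to the payload size and the target area. A hypothetical helper, assuming square blocks and one unit of coding information per block (both assumptions, not stated in the claim):

```python
import math

def unit_block_side(info_bits: int, area_w: int, area_h: int) -> int:
    """Choose a square unit-block side length so the target area yields
    at least `info_bits` blocks, one bit per block."""
    if info_bits <= 0:
        raise ValueError("need at least one unit of information")
    # start from the area-per-bit upper bound, then shrink until the
    # integer grid of blocks actually covers the required bit count
    side = math.isqrt((area_w * area_h) // info_bits)
    while side > 1 and (area_w // side) * (area_h // side) < info_bits:
        side -= 1
    return max(1, side)
```

For example, embedding 100 bits into a 200x200 target area would give 20x20-pixel blocks under these assumptions; a larger payload or smaller area forces smaller blocks.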
9. A computer device comprising a processor and a memory;
wherein the processor, by reading the executable program code stored in the memory, runs a program corresponding to the executable program code, so as to implement the video content-based information implantation method according to any one of claims 1 to 6.
10. A non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the video content-based information implantation method according to any one of claims 1 to 6.
CN201910002096.8A 2019-01-02 2019-01-02 Information implantation method and device based on video content and storage medium Active CN111405349B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910002096.8A CN111405349B (en) 2019-01-02 2019-01-02 Information implantation method and device based on video content and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910002096.8A CN111405349B (en) 2019-01-02 2019-01-02 Information implantation method and device based on video content and storage medium

Publications (2)

Publication Number Publication Date
CN111405349A true CN111405349A (en) 2020-07-10
CN111405349B CN111405349B (en) 2022-05-13

Family

ID=71428221

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910002096.8A Active CN111405349B (en) 2019-01-02 2019-01-02 Information implantation method and device based on video content and storage medium

Country Status (1)

Country Link
CN (1) CN111405349B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040198279A1 (en) * 2002-12-16 2004-10-07 Nokia Corporation Broadcast media bookmarks
CN1655616A (en) * 2005-02-25 2005-08-17 吉林大学 Audio-embedded video frequency in audio-video mixed signal synchronous compression and method of extraction
CN101304522A (en) * 2008-06-20 2008-11-12 中国民航大学 Considerable information hide method using JPEG2000 compression image as carrier
JP2008301010A (en) * 2007-05-30 2008-12-11 Mitsubishi Electric Corp Image processor and method, and image display device
CN101960773A (en) * 2007-08-17 2011-01-26 隍科技有限公司 General data hiding framework using parity for minimal switching
CN104184923A (en) * 2014-08-27 2014-12-03 天津三星电子有限公司 System and method used for retrieving figure information in video
CN104202501A (en) * 2014-08-29 2014-12-10 西安空间无线电技术研究所 Method for performing information carrying and transmission in image
CN105303510A (en) * 2014-07-31 2016-02-03 国际商业机器公司 Method and device for hiding information in image
CN105933710A (en) * 2016-05-20 2016-09-07 中国人民解放军信息工程大学 Information transmission method and information transmission system
CN106780283A (en) * 2016-12-27 2017-05-31 Tcl集团股份有限公司 Steganography information coding method and device and steganography information decoding method and device
CN107205155A (en) * 2017-05-24 2017-09-26 上海交通大学 Quick Response Code based on human eye vision fusion characteristics on spatial domain hides picture system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
G PRABAKARAN: "A modified secure digital image steganography based on Discrete Wavelet Transform", 2012 International Conference on Computing, Electronics and Electrical Technologies *
SU PENGTAO: "A video information hiding algorithm based on alpha blending", Information Technology and Network Security *

Also Published As

Publication number Publication date
CN111405349B (en) 2022-05-13

Similar Documents

Publication Publication Date Title
JP2020174374A (en) Digital image recompression
JP2968582B2 (en) Method and apparatus for processing digital data
US20120189197A1 (en) Device, system, and method for indexing digital image frames
CN106717007B (en) Cloud end streaming media server
CN102156611A (en) Method and apparatus for creating animation message
CN112789650A (en) Detecting semi-transparent image watermarks
US20180184096A1 (en) Method and apparatus for encoding and decoding lists of pixels
US20180232858A1 (en) Image compression method, image reconstruction method, image compression device, image reconstruction device, and image compression and reconstruction system
CN104023181A (en) Information processing method and device
CN114286172B (en) Data processing method and device
CN115358911A (en) Screen watermark generation method, device, equipment and computer readable storage medium
CN111405349B (en) Information implantation method and device based on video content and storage medium
CN108668169B (en) Image information processing method and device, and storage medium
KR20150077294A (en) Adaptive depth offset compression
CN109451318B (en) Method, apparatus, electronic device and storage medium for facilitating VR video encoding
US20110221775A1 (en) Method for transforming displaying images
CN108668170B (en) Image information processing method and device, and storage medium
US9273955B2 (en) Three-dimensional data acquisition
CN101065760B (en) System and method for processing image data
CN101903907A (en) Edge directed image processing
KR102531605B1 (en) Hybrid block based compression
US20190035046A1 (en) Image processing device, image processing method, and program
CN113992951A (en) Screen projection method, projector and terminal equipment
CN113344161A (en) Dynamic QR code generation method and device, computer equipment and storage medium
CN111526366A (en) Image processing method, image processing apparatus, image capturing device, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant