CN105991999B - Information processing method and electronic equipment - Google Patents

Information processing method and electronic equipment

Info

Publication number
CN105991999B
Authority
CN
China
Prior art keywords
image, frame image, image identification, code, frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510082875.5A
Other languages
Chinese (zh)
Other versions
CN105991999A (en)
Inventor
辛晨 (Xin Chen)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN201510082875.5A priority Critical patent/CN105991999B/en
Publication of CN105991999A publication Critical patent/CN105991999A/en
Application granted granted Critical
Publication of CN105991999B publication Critical patent/CN105991999B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention discloses an information processing method and an electronic device. The method comprises the following steps: capturing images of a video stream presentation area to obtain at least two frames of images; identifying each of the at least two frames of images to obtain at least two image identification codes presented in the area; analyzing the at least two image identification codes to obtain the coding symbols included in each image identification code, obtaining the codeword corresponding to each coding symbol based on the coding strategy of the coding symbols, and combining the codewords to obtain the codeword sequence corresponding to each image identification code; and decoding the codeword sequences corresponding to the at least two image identification codes to obtain target data. The invention supports fast, large-volume data transmission between electronic devices at low implementation cost.

Description

Information processing method and electronic equipment
Technical Field
The present invention relates to information processing technologies, and in particular, to an information processing method and an electronic device.
Background
The volume of data communicated between electronic devices keeps growing. Two-dimensional codes have become a widely used technology because of their convenience, but within a given space the data capacity of a single two-dimensional code cannot support larger and faster transfers. In addition, an electronic device may lack a wireless fidelity (WiFi) or Bluetooth communication module because of cost or usage-scenario constraints. This makes fast, large-volume data transfer between electronic devices very inconvenient.
Disclosure of Invention
Embodiments of the present invention provide an information processing method and an electronic device that support fast, large-volume data transmission between electronic devices at low implementation cost.
The technical scheme of the embodiment of the invention is realized as follows:
the embodiment of the invention provides an information processing method, which is applied to electronic equipment and comprises the following steps:
acquiring images of a video stream presentation area to obtain at least two frames of images;
identifying each frame of image in the at least two frames of images to obtain at least two image identification codes presented by the area;
analyzing the at least two image identification codes to obtain the coding symbols included in each image identification code, obtaining the codewords corresponding to the coding symbols based on the coding strategy of the coding symbols, and combining the codewords to obtain the codeword sequence corresponding to each image identification code;
and decoding the code word sequences corresponding to the at least two image identification codes to obtain target data.
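The four claimed steps can be sketched end to end. The sketch below is illustrative only: it models a captured image identification code as a grid of 'B'/'W' coding symbols and assumes a simple black→1 / white→0 codeword mapping, which the description mentions only as an example.

```python
# Hypothetical model: one frame = one image identification code = a grid of
# coding symbols, 'B' (black) or 'W' (white).
SYMBOL_TO_CODEWORD = {'B': '1', 'W': '0'}  # assumed mapping from the description

def parse_frame(frame):
    """Map each coding symbol to its codeword and combine them into the
    codeword sequence for this image identification code."""
    return ''.join(SYMBOL_TO_CODEWORD[s] for row in frame for s in row)

def decode_sequences(sequences):
    """Decode the codeword sequences into target data by packing every
    8 codewords into one byte."""
    bits = ''.join(sequences)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

# Two captured "frames", each carrying one image identification code.
frames = [
    [['W', 'B', 'W', 'W', 'W', 'W', 'W', 'B']],  # 01000001 -> b'A'
    [['W', 'B', 'W', 'W', 'W', 'W', 'B', 'W']],  # 01000010 -> b'B'
]
print(decode_sequences(parse_frame(f) for f in frames))  # b'AB'
```

Each frame contributes one data segment; the segments combine into the target data.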
As an embodiment, the image capturing the video stream presentation area to obtain at least two frames of images includes:
identifying each frame of image presented by the video stream presentation area;
when a frame image presented in the video stream presentation area is identified to include a start image identification code, starting image acquisition; and
ending image acquisition when a frame image presented in the video stream presentation area is identified to include an end image identification code.
As an embodiment, the identifying each of the at least two frames of images to obtain at least two image identification codes presented in the area includes:
identifying the position of an image identification code in a first frame image of the at least two frame images to obtain an image identification code positioning area, wherein the image identification code positioning area identifies areas in the first frame image, in which image identification codes are distributed;
and identifying a corresponding image identification code in a frame image after the first frame image based on the image identification code locating area.
As an embodiment, the decoding the codeword sequences corresponding to the at least two image identification codes to obtain target data includes:
comparing the coding symbols of M areas in the ith frame image with the coding symbols of the M areas in the (i+j)th frame image;
determining the codeword sequence corresponding to the (i+j)th frame image based on the comparison result and the codeword sequence corresponding to the ith frame image;
and decoding the codeword sequences corresponding to the first frame image through the Nth frame image to obtain the target data;
wherein i is greater than or equal to 1 and less than or equal to N, j is greater than or equal to 1, i+j is less than or equal to N, N is the number of acquired frame images, and M is an integer greater than or equal to 2.
As an embodiment, the determining, based on the comparison result and the codeword sequence corresponding to the ith frame image, the codeword sequence corresponding to the (i+j)th frame image includes:
when the comparison result indicates that the coding symbol of the kth region of the ith frame image is consistent with the coding symbol of the kth region of the (i+j)th frame image, determining the codeword corresponding to the coding symbol of the kth region of the ith frame image as the codeword corresponding to the coding symbol of the kth region of the (i+j)th frame image;
when the comparison result indicates that the coding symbol of the kth region of the ith frame image is inconsistent with the coding symbol of the kth region of the (i+j)th frame image, determining the codeword corresponding to the coding symbol of the kth region of the (i+j)th frame image based on the coding strategy of the coding symbols;
and combining the obtained codewords to obtain the codeword sequence corresponding to the (i+j)th frame image, wherein k is greater than or equal to 1 and less than or equal to M.
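The region-wise comparison described above amounts to a delta-decoding step: codewords are recomputed only for regions whose coding symbols changed between frame i and frame i+j, and reused otherwise. A minimal sketch, with a hypothetical `delta_codewords` helper and an assumed black→1 / white→0 mapping:

```python
SYMBOL_TO_CODEWORD = {'B': '1', 'W': '0'}  # assumed mapping

def delta_codewords(prev_symbols, curr_symbols, prev_codewords):
    """For each of the M regions: reuse the previous frame's codeword when the
    coding symbol is unchanged, otherwise re-derive it from the coding strategy."""
    assert len(prev_symbols) == len(curr_symbols) == len(prev_codewords)
    out = []
    for k, (p, c) in enumerate(zip(prev_symbols, curr_symbols)):
        if p == c:                       # consistent: reuse codeword of region k
            out.append(prev_codewords[k])
        else:                            # inconsistent: decode region k afresh
            out.append(SYMBOL_TO_CODEWORD[c])
    return out

prev = ['B', 'W', 'B', 'W']              # frame i, M = 4 regions
curr = ['B', 'W', 'W', 'W']              # frame i+j: only region k=3 changed
prev_cw = [SYMBOL_TO_CODEWORD[s] for s in prev]
print(delta_codewords(prev, curr, prev_cw))  # ['1', '0', '0', '0']
```

Only one region is re-derived here; the other three codewords are carried over from frame i.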
As an embodiment, the method further comprises:
analyzing the target data and extracting at least one instruction from the target data;
extracting target content from the target data based on the instruction, and/or extracting target content from data stored by the electronic equipment;
and executing the operation indicated by the instruction by using the extracted target content.
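These three steps can be sketched as follows. The `instruction\n<content>` framing, the `OPEN_EBOOK` instruction name, and the local-store fallback are assumptions for illustration; the embodiment does not fix a data format.

```python
def parse_target_data(data: bytes):
    """Split decoded target data into an instruction and its content.
    The 'instruction\\ncontent' framing is an assumed format."""
    instruction, _, content = data.partition(b'\n')
    return instruction.decode(), content

def execute(data: bytes, local_store: dict):
    """Extract the target content from the target data, or fall back to
    content already stored on the electronic device, then act on it."""
    instruction, content = parse_target_data(data)
    if instruction == 'OPEN_EBOOK' and not content:
        content = local_store.get('ebook', b'')  # locally stored data
    return instruction, content

print(execute(b'OPEN_EBOOK\nchapter 1 ...', {}))
# ('OPEN_EBOOK', b'chapter 1 ...')
```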
An embodiment of the present invention further provides an electronic device, where the electronic device includes:
the acquisition unit is used for acquiring images of the video stream presentation area to obtain at least two frames of images;
the identification unit is used for identifying each frame of image in the at least two frames of images to obtain at least two image identification codes presented by the area;
the first analysis unit is used for analyzing the at least two image identification codes to obtain a coding symbol included by each image identification code, obtaining a code word corresponding to the coding symbol based on a coding strategy of the coding symbol, and combining the code words corresponding to the coding symbol to obtain a code word sequence corresponding to each image identification code;
and the decoding unit is used for decoding the code word sequences corresponding to the at least two image identification codes to obtain target data.
As an embodiment, the identification unit is further configured to identify each frame of image presented in the video stream presentation area;
the identification unit is further configured to trigger the acquisition unit to start image acquisition when a frame image presented in the video stream presentation area is identified to include the start image identification code,
and to trigger the acquisition unit to end image acquisition when a frame image presented in the video stream presentation area is identified to include the end image identification code.
As an embodiment, the identifying unit is further configured to identify a position of an image identifier in a first frame image of the at least two frame images to obtain an image identifier locating area, where the image identifier locating area identifies an area in the first frame image where an image identifier is distributed;
and identifying a corresponding image identification code in a frame image after the first frame image based on the image identification code locating area.
As an embodiment, the decoding unit includes:
the comparison module is used for comparing the coding symbols in the ith frame image with the coding symbols of the M areas in the (i + j) th frame image;
a determining module, configured to determine a codeword sequence corresponding to the i + j frame image based on the comparison result and the codeword sequence corresponding to the i frame image;
the decoding module is used for decoding codeword sequences corresponding to the first frame image to the Nth frame image to obtain the target data;
wherein i is greater than or equal to 1 and less than or equal to N, j is greater than or equal to 1 and i + j is less than or equal to N, N is the number of acquired frame images, and M is an integer greater than or equal to 2.
As an embodiment, when the comparison result indicates that the coding symbol of the kth region of the ith frame image is consistent with the coding symbol of the kth region of the (i+j)th frame image, the determining module is further configured to determine the codeword corresponding to the coding symbol of the kth region of the ith frame image as the codeword corresponding to the coding symbol of the kth region of the (i+j)th frame image;
when the comparison result indicates that the coding symbol of the kth region of the ith frame image is inconsistent with the coding symbol of the kth region of the (i+j)th frame image, to determine the codeword corresponding to the coding symbol of the kth region of the (i+j)th frame image based on the coding strategy of the coding symbols;
and to combine the obtained codewords to obtain the codeword sequence corresponding to the (i+j)th frame image, wherein k is greater than or equal to 1 and less than or equal to M.
As an embodiment, the electronic device further comprises:
the second analysis unit is used for analyzing the target data and extracting at least one instruction from the target data;
the extracting unit is used for extracting target content from the target data based on the instruction and/or extracting the target content from the data stored in the electronic equipment;
a content operating unit configured to execute the operation indicated by the instruction using the target content extracted by the extracting unit.
In the embodiments of the invention, the video stream presentation area carries the data to be transmitted in a series of (at least two) image identification codes, each of which carries one segment of the data, and the image identification codes are presented in the form of a video stream. This overcomes the limited data capacity of a single image identification code. When data needs to be acquired, images of the video stream presentation area are captured to obtain the series of image identification codes, which are decoded to recover the complete transmitted data. Transfers performed this way are not limited in capacity, require no Bluetooth or WiFi communication module, and therefore reduce implementation cost.
Drawings
FIG. 1a is a first schematic flow chart illustrating an implementation of an information processing method according to an embodiment of the present invention;
FIG. 1b is a diagram illustrating a first application scenario of an information processing method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a second implementation flow of the information processing method according to the embodiment of the present invention;
FIG. 3a is a schematic view of a third implementation flow of an information processing method according to an embodiment of the present invention;
FIG. 3b is a schematic diagram of an application scenario of the information processing method according to the embodiment of the present invention;
FIG. 4 is a schematic diagram illustrating a fourth implementation flow of the information processing method according to the embodiment of the present invention;
FIG. 5a is a schematic diagram of an implementation flow of an information processing method according to an embodiment of the present invention;
FIG. 5b is a schematic diagram of an application scenario of the information processing method in the embodiment of the present invention;
FIG. 5c is a schematic view of an application scenario of the information processing method in the embodiment of the present invention;
FIG. 6a is a first schematic structural diagram of an electronic device according to an embodiment of the present invention;
FIG. 6b is a schematic structural diagram of an electronic device in an embodiment of the invention.
Detailed Description
The invention is described in further detail below with reference to the figures and specific examples.
Example one
This embodiment describes an information processing method applicable to electronic devices such as smartphones and tablet computers. In practice such devices often need to transmit large volumes of data (such as advertisements and electronic books). Traditional data transmission relies on Bluetooth or WiFi, whose implementation cost is high; and a single image identification code (such as a two-dimensional code or a bar code) is difficult to use for this purpose because of its limited data capacity.
The information processing method described in this embodiment reduces the implementation cost of data transmission and supports large-volume transfers. In this embodiment, the electronic device 1 needs to acquire target data (such as advertisements or electronic books) from the electronic device 2. When the electronic device 1 held by user 1 needs to acquire the target data from the electronic device 2 held by user 2, user 2 can trigger the electronic device 2 to pre-process the data that user 1 needs: the data are loaded into a series of (at least two) image identification codes (since large-volume transmission is taken as the example here, the number of data-carrying image identification codes is usually far more than two; in practice it may run to hundreds, thousands, or tens of thousands), and the image identification codes are presented in the form of a video stream so that the electronic device 1 can scan the image display area of the electronic device 2. The image identification codes may be two-dimensional codes, bar codes, or other image coding forms; this embodiment does not limit their specific form. The processing on the electronic device 1 side is explained below:
the information processing method described in this embodiment is applied to the electronic device 1, and as shown in fig. 1a, includes the following steps:
step 101, image acquisition is performed on a video stream presentation area to obtain at least two frames of images.
The video stream presenting area refers to an area of the display unit of the electronic device 2 presenting the image identification code, and the user 1 can adjust the pose of the electronic device 1 to enable the display unit of the electronic device 2 to be located in the image acquisition area of the electronic device 1; at least two frames of images bear image identification codes corresponding to all data segments of the target data; here, the scanning frequency of the electronic device 1 for the video stream presentation area is greater than or equal to the frame rate of the electronic device 2 for presenting the video stream, so that the electronic device 1 completely acquires the image identifier presented by the electronic device 2.
Step 102, each frame of image in the at least two frames of images is identified, and at least two image identification codes (for example, two-dimensional codes) presented by the area are obtained.
Each of the at least two frames of images carries an image identification code.
step 103, analyzing the at least two image identification codes to obtain a coding symbol included in each image identification code.
For example, when the image identification code is a two-dimensional code, the code symbol is a basic identification unit of the two-dimensional code, that is, a black-and-white image symbol.
And 104, obtaining a code word corresponding to the coding symbol based on the coding strategy of the coding symbol, and combining the code words corresponding to the coding symbol to obtain a code word sequence corresponding to each image identification code.
One codeword corresponds to one coding symbol; for example, a black coding symbol may correspond to codeword 1 and a white coding symbol to codeword 0. The encoding strategy used in step 104 depends on the type of image identification code; for example, when the image identification code is a two-dimensional code, a Quick Response (QR) encoding strategy is used, in which one byte of data in the target data segment yields an 8-bit codeword sequence.
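The byte-to-codeword relationship mentioned here can be illustrated directly (QR data codewords are 8 bits wide; the helper names are illustrative):

```python
def byte_to_codewords(b: int) -> str:
    """One byte of the target data segment yields an 8-bit codeword sequence
    (cf. the QR scheme, where a data codeword is 8 bits wide)."""
    return format(b, '08b')

def codewords_to_byte(cw: str) -> int:
    """Inverse mapping: an 8-bit codeword sequence back to one byte."""
    return int(cw, 2)

cw = byte_to_codewords(ord('Q'))
print(cw)                             # '01010001'
print(codewords_to_byte(cw) == ord('Q'))  # True
```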
And 105, decoding the code word sequences corresponding to the at least two image identification codes to obtain target data.
The codeword sequences corresponding to the at least two image identification codes may be decoded separately to obtain data segments, which are then combined into the target data; alternatively, the codeword sequences may be concatenated and decoded together to obtain the target data.
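Both decoding alternatives can be sketched and checked against each other; the 8-bit codeword grouping is an assumption carried over from the QR encoding example.

```python
def decode_segment(cw_seq: str) -> bytes:
    """Decode one image identification code's codeword sequence into its data segment."""
    return bytes(int(cw_seq[i:i + 8], 2) for i in range(0, len(cw_seq), 8))

def decode_per_code(sequences):
    """Alternative 1: decode each codeword sequence separately, then combine the segments."""
    return b''.join(decode_segment(s) for s in sequences)

def decode_joint(sequences):
    """Alternative 2: concatenate the codeword sequences, then decode them together."""
    return decode_segment(''.join(sequences))

seqs = ['01001000', '01101001']  # two image identification codes
print(decode_per_code(seqs))     # b'Hi'
print(decode_per_code(seqs) == decode_joint(seqs))  # True: both routes agree
```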
As shown in fig. 1b, in one application scenario, user 1 holds the electronic device 1 and needs to obtain an electronic book file (corresponding to the target data) from the electronic device 2 held by user 2. User 2 triggers the electronic device 2 to pre-process the electronic book file: a video stream is formed from a series of image identification codes carrying the file data and played on the display unit. User 1 adjusts the pose of the electronic device 1 so that the display unit of the electronic device 2 falls within its image acquisition area, and triggers the electronic device 1 to execute the above steps, whereby the image identification codes are extracted from the video stream and the target data are recovered from them.
The video stream presentation area thus carries the data to be transmitted in a series of (at least two) image identification codes, each bearing one data segment, presented in the form of a video stream; this overcomes the limited data capacity of a single image identification code. When data needs to be acquired, images of the presentation area are captured to obtain the series of image identification codes, which are decoded to recover the complete transmitted data; the transfer is not limited in capacity, requires no Bluetooth or WiFi, and reduces implementation cost.
Example two
This embodiment describes an information processing method applicable to electronic devices such as smartphones and tablet computers. In practice such devices often need to transmit large volumes of data (such as advertisements and electronic books). Traditional data transmission relies on Bluetooth or WiFi, whose implementation cost is high; and a single image identification code (such as a two-dimensional code or a bar code) is difficult to use for this purpose because of its limited data capacity.
the information processing method described in this embodiment can reduce the implementation cost of data transmission, and can perform large-capacity data transmission, and in this embodiment, the electronic device 1 needs to acquire target data (such as advertisements and electronic books) from the electronic device 2 as an example for explanation; when the electronic device 1 held by the user 1 needs to acquire target data from the electronic device 2 held by the user 2, the user 2 may trigger the electronic device 2 to pre-process the data that the user 1 needs to acquire, including loading the data in a series of (at least two) image identification codes (in this embodiment, a large-capacity data transmission example is used for explanation, so the number of the image identification codes that load the data is often much larger than two, and in practical application, hundreds, thousands, or tens of thousands are used as a number unit for transmission of the image identification codes), and presenting the image identification codes in a video stream form, so that the electronic device 1 scans an image display area of the electronic device 2; the image identification code can be obtained by a two-dimensional code, a bar code or other image coding forms, and the specific form of the image identification code is not limited in the embodiment; the following explains the processing of the electronic apparatus 1:
the information processing method described in the present embodiment is applied to the electronic device 1, and as shown in fig. 2, includes the following steps:
step 201, identifying each frame of image presented in the video stream presenting area frame by frame.
Step 202, when a frame image presented in the video stream presentation area is recognized to include the start image identification code, starting image acquisition; and ending image acquisition when a frame image presented in the video stream presentation area is recognized to include the end image identification code.
The image capture in step 202 differs from the frame-by-frame recognition in step 201 in that the frames captured in step 202 are treated by the electronic device 1 as frames carrying image identification codes of the target data.
In this embodiment, the electronic device 2 includes a start image identification code (indicating that the subsequently presented image identification codes carry data segments of the target data) and an end image identification code (indicating that no further data segments of the target data will be presented) in the series of image identification codes presented as a video stream. That is, the image identification codes presented between the start and end image identification codes carry the data segments of the target data, which improves the efficiency with which the electronic device 1 parses them. So that the electronic device 1 can promptly find the first and last image identification codes carrying data segments, the electronic device 2 may present the start and end image identification codes statically, or at a rate different from that used for the data-carrying codes. For example, the start image identification code may be shown for more than 2 seconds, so that the electronic device 1 knows that the electronic device 2 will next present data-carrying image identification codes as a video stream; likewise, the end image identification code may be shown for more than 2 seconds, so that the electronic device 1 knows that presentation is complete.
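The start/end marker logic can be sketched as a simple state machine over recognized frames; `'START'` and `'END'` are stand-ins for the recognized start and end image identification codes, and the frame labels are hypothetical.

```python
def capture_payload_frames(frames):
    """Collect the frames presented between the start image identification
    code and the end image identification code."""
    capturing = False
    payload = []
    for f in frames:
        if f == 'START':
            capturing = True          # start image acquisition
        elif f == 'END':
            break                     # end image acquisition
        elif capturing:
            payload.append(f)         # frame carrying a data segment
    return payload

# The start marker may be held for several frames (e.g. >2 s on screen);
# repeated 'START' frames are simply ignored.
stream = ['noise', 'START', 'START', 'code1', 'code2', 'END', 'noise']
print(capture_payload_frames(stream))  # ['code1', 'code2']
```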
The video stream presenting area refers to an area of the display unit of the electronic device 2 presenting the image identification code, and the user 1 can adjust the pose of the electronic device 1 to enable the display unit of the electronic device 2 to be located in the image acquisition area of the electronic device 1; at least two frames of images bear image identification codes corresponding to all data segments of the target data; here, the scanning frequency of the electronic device 1 for the video stream presentation area is greater than or equal to the frame rate of the electronic device 2 for presenting the video stream, so that the electronic device 1 completely acquires the image identifier presented by the electronic device 2.
Step 203, each frame of image in the at least two frames of images is identified, and at least two image identification codes (for example, two-dimensional codes) presented by the area are obtained.
Each of the at least two frames of images carries an image identification code.
step 204, analyzing the at least two image identification codes to obtain a coding symbol included in each image identification code.
For example, when the image identification code is a two-dimensional code, the code symbol is a basic identification unit of the two-dimensional code, that is, a black-and-white image symbol.
Step 205, obtaining a code word corresponding to the code symbol based on the coding strategy of the code symbol, and combining the code words corresponding to the code symbol to obtain a code word sequence corresponding to each image identification code.
One codeword corresponds to one coding symbol; for example, a black coding symbol may correspond to codeword 1 and a white coding symbol to codeword 0. The encoding strategy used in step 205 depends on the type of image identification code; for example, when the image identification code is a two-dimensional code, a Quick Response (QR) encoding strategy is used, in which one byte of data in the target data segment yields an 8-bit codeword sequence.
And step 206, decoding the codeword sequences corresponding to the at least two image identification codes to obtain target data.
The codeword sequences corresponding to the at least two image identification codes may be decoded separately to obtain data segments, which are then combined into the target data; alternatively, the codeword sequences may be concatenated and decoded together to obtain the target data.
In one application scenario, user 1 holds the electronic device 1 and needs to obtain an electronic book file (corresponding to the target data) from the electronic device 2 held by user 2. User 2 triggers the electronic device 2 to pre-process the electronic book file: a video stream is formed from a series of image identification codes carrying the file data and played on the display unit. User 1 adjusts the pose of the electronic device 1 so that the display unit of the electronic device 2 falls within its image acquisition area. The electronic device 2 first presents the start image identification code, which triggers the electronic device 1 to begin image acquisition of the display area of the electronic device 2 and to identify, in the captured frames, the image identification codes carrying data segments of the target data; image acquisition stops when the end image identification code is identified. The image identification codes can thus be extracted from the video stream and the target data recovered from them.
in this embodiment, the electronic device 1 can perform image acquisition on the display area of the electronic device 2 based on the identification of the start image identification code and the end image identification code, so as to extract the image identification code of the data segment carrying the target data from the acquired image; the electronic equipment 1 does not need to be manually controlled by a user to start image acquisition and finish image acquisition, so that the operation is more intelligent, and the user experience is improved.
EXAMPLE III
This embodiment describes an information processing method applicable to electronic devices such as smartphones and tablet computers. In practice such devices often need to transmit large volumes of data (such as advertisements and electronic books). Traditional data transmission relies on Bluetooth or WiFi, whose implementation cost is high; and a single image identification code (such as a two-dimensional code or a bar code) is difficult to use for this purpose because of its limited data capacity.
the information processing method described in this embodiment can reduce the implementation cost of data transmission, and can perform large-capacity data transmission, and in this embodiment, the electronic device 1 needs to acquire target data (such as advertisements and electronic books) from the electronic device 2 as an example for explanation; when the electronic device 1 held by the user 1 needs to acquire target data from the electronic device 2 held by the user 2, the user 2 may trigger the electronic device 2 to pre-process the data that the user 1 needs to acquire, including loading the data in a series of (at least two) image identification codes (in this embodiment, a large-capacity data transmission example is used for explanation, so the number of the image identification codes that load the data is often much larger than two, and in practical application, hundreds, thousands, or tens of thousands are used as a number unit for transmission of the image identification codes), and presenting the image identification codes in a video stream form, so that the electronic device 1 scans an image display area of the electronic device 2; the image identification code can be obtained by a two-dimensional code, a bar code or other image coding forms, and the specific form of the image identification code is not limited in the embodiment; the following explains the processing of the electronic apparatus 1:
the information processing method described in the present embodiment is applied to the electronic device 1, and as shown in fig. 3a, includes the following steps:
Step 301: identify the position of the image identification code in the first frame image of the at least two frame images to obtain an image identification code locating area, where the locating area identifies the region of the first frame image in which the image identification code is distributed.
Step 302: based on the image identification code locating area, identify the corresponding image identification code in the frame images subsequent to the first frame image.
The video stream presentation area is the region of the display unit of electronic device 2 in which the image identification codes are presented; user 1 can adjust the pose of electronic device 1 so that the display unit of electronic device 2 falls within the image acquisition area of electronic device 1. The at least two frame images carry the image identification codes corresponding to all data segments of the target data. Here, the scanning frequency of electronic device 1 over the video stream presentation area is greater than or equal to the frame rate at which electronic device 2 presents the video stream, so that electronic device 1 captures every image identification code. Moreover, as shown in fig. 3b, electronic device 2 always displays the image identification codes in a fixed area of its display unit (the image identification code locating area), which makes it easy for electronic device 1 to extract the codes from the captured images.
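The locating-area technique of steps 301–302 can be sketched as follows. This is a minimal, self-contained illustration, not the patent's implementation: frames are modeled as toy 2D pixel grids (1 = dark, 0 = background), and a simple bounding-box finder stands in for a real two-dimensional-code detector.

```python
# Hypothetical sketch: locate the image identification code once in the
# first frame, then crop the same fixed region from every later frame.

def find_locating_area(frame):
    """Bounding box (top, left, bottom, right) of all dark pixels."""
    rows = [r for r, row in enumerate(frame) if any(row)]
    cols = [c for c in range(len(frame[0])) if any(row[c] for row in frame)]
    return rows[0], cols[0], rows[-1], cols[-1]

def crop(frame, area):
    top, left, bottom, right = area
    return [row[left:right + 1] for row in frame[top:bottom + 1]]

frames = [
    [[0, 0, 0, 0],
     [0, 1, 0, 0],
     [0, 0, 1, 0],
     [0, 0, 0, 0]],
    [[0, 0, 0, 0],
     [0, 1, 1, 0],
     [0, 1, 0, 0],
     [0, 0, 0, 0]],
]

area = find_locating_area(frames[0])     # identified once, on the first frame
codes = [crop(f, area) for f in frames]  # later frames reuse the same area
```

Because the code always appears in the same fixed display region, only the first frame pays the cost of detection; every subsequent frame is a cheap crop.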
Step 303: identify each frame of image in the at least two frame images to obtain the at least two image identification codes (for example, two-dimensional codes) presented in the area.
Each of the at least two frame images carries an image identification code.
Step 304: analyze the at least two image identification codes to obtain the coding symbols included in each image identification code.
For example, when the image identification code is a two-dimensional code, the coding symbol is the basic identification unit of the two-dimensional code, that is, a black or white image symbol.
Step 305: obtain the codeword corresponding to each coding symbol based on the coding strategy of the coding symbols, and combine the codewords to obtain the codeword sequence corresponding to each image identification code.
Each codeword corresponds to one coding symbol; for example, a black coding symbol may correspond to codeword 1 and a white coding symbol to codeword 0. The coding strategy used in step 305 depends on the type of image identification code: for example, when the image identification code is a two-dimensional code, a Quick Response (QR) coding strategy is used, in which one byte of data in the target data segment yields an 8-bit codeword sequence.
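The symbol-to-codeword mapping of step 305 can be sketched as follows. The `'B'`/`'W'` grid is an assumed intermediate representation of the parsed coding symbols; the black-to-1 / white-to-0 mapping is the example given above.

```python
# Map each coding symbol of a parsed two-dimensional code to a codeword
# (black module -> '1', white module -> '0') and combine the codewords
# into the codeword sequence for that image identification code.

def symbols_to_codeword_sequence(symbol_grid):
    codeword = {'B': '1', 'W': '0'}  # one codeword per coding symbol
    return ''.join(codeword[s] for row in symbol_grid for s in row)

grid = [['B', 'W', 'B', 'B'],
        ['W', 'W', 'B', 'W']]
seq = symbols_to_codeword_sequence(grid)
# seq == '10110010'
```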
Step 306: decode the codeword sequences corresponding to the at least two image identification codes to obtain the target data.
The codeword sequences corresponding to the at least two image identification codes may be decoded individually to obtain data segments that are then combined into the target data; alternatively, the codeword sequences may be decoded together to obtain the target data directly.
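Step 306 can be sketched as follows under the assumption that each 8-bit group of a codeword sequence encodes one byte of a data segment (as in QR byte mode); error correction is omitted for brevity, and the concrete sequences are illustrative.

```python
# Decode each codeword sequence into its data segment, then concatenate
# the segments from all image identification codes into the target data.

def decode_codeword_sequence(seq):
    return bytes(int(seq[i:i + 8], 2) for i in range(0, len(seq), 8))

def assemble_target_data(codeword_sequences):
    return b''.join(decode_codeword_sequence(s) for s in codeword_sequences)

sequences = ['0100100001101001',  # segment "Hi"
             '0010000100100001']  # segment "!!"
data = assemble_target_data(sequences)
# data == b'Hi!!'
```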
In one application scenario, as shown in fig. 3b, user 1 holds electronic device 1 and needs to acquire an electronic book file (the target data) from electronic device 2 held by user 2. User 2 triggers electronic device 2 to pre-process the file into a video stream in which a series of image identification codes carry the file data, and to play the video stream on its display unit. User 1 adjusts the pose of electronic device 1 so that the display unit of electronic device 2 falls within its image acquisition area, and triggers electronic device 1 to perform the steps above: the image identification code locating area is determined from the first captured frame so that the codes can be extracted from the video stream, and the target data is then resolved from the image identification codes.
In this embodiment, because electronic device 1 relies on the image identification code locating area, subsequently captured images do not need to be re-identified in full; the image identification code is extracted directly from the locating area, which improves processing efficiency.
Example four
As in the preceding embodiment, the information processing method of this embodiment applies to electronic devices such as smart phones and tablet computers, which often need to transmit large volumes of data (such as advertisements or electronic books); traditional Bluetooth or Wi-Fi transmission is costly to implement, and a single image identification code (such as a two-dimensional code or bar code) carries too little data to meet practical needs.
The method described in this embodiment likewise reduces the implementation cost of data transmission and supports large-capacity transfers. As before, electronic device 1 acquires target data (such as advertisements or electronic books) from electronic device 2: user 2 triggers electronic device 2 to load the data into a series of (at least two, in practice often hundreds to tens of thousands of) image identification codes and to present them as a video stream, so that electronic device 1 can scan the image display area of electronic device 2. The image identification codes may be two-dimensional codes, bar codes, or any other image coding form. The processing performed by electronic device 1 is explained below.
the information processing method described in the present embodiment is applied to the electronic device 1, and as shown in fig. 4, includes the following steps:
Step 401: identify, frame by frame, each frame of image presented in the video stream presentation area.
Step 402: when a frame image presented in the video stream presentation area is recognized to include the start image identification code, start image acquisition; when a frame image presented in the area is recognized to include the end image identification code, end image acquisition.
When image acquisition starts, the position of the image identification code in the first frame image of the at least two frame images is identified to obtain an image identification code locating area, which identifies the region of the first frame image in which the image identification code is distributed; based on the locating area, the corresponding image identification code is then identified in the frame images subsequent to the first frame image.
As in the preceding embodiment, the video stream presentation area is the region of the display unit of electronic device 2 in which the image identification codes are presented; user 1 adjusts the pose of electronic device 1 so that this display unit falls within its image acquisition area. The at least two frame images carry the image identification codes for all data segments of the target data; the scanning frequency of electronic device 1 is greater than or equal to the frame rate at which electronic device 2 presents the video stream, so that every image identification code is captured; and, as shown in fig. 3b, electronic device 2 always displays the codes in a fixed area of its display unit (the image identification code locating area), which makes them easy to extract from the captured images.
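Steps 401–402 can be sketched as follows. This is an illustrative assumption, not the patent's implementation: frames are stand-in strings, and `START`/`END` represent whatever payloads the two devices agree to use as the start and end image identification codes.

```python
# Scan the stream frame by frame; begin collecting when the start marker
# is recognized and stop when the end marker is recognized.

START, END = 'START', 'END'

def collect_frames(stream):
    collected, capturing = [], False
    for frame in stream:
        if frame == START:
            capturing = True   # start image acquisition
            continue
        if frame == END:
            break              # end image acquisition
        if capturing:
            collected.append(frame)
    return collected

stream = ['noise', 'START', 'code1', 'code2', 'END', 'noise']
assert collect_frames(stream) == ['code1', 'code2']
```

Frames seen before the start marker or after the end marker are simply ignored, so the receiver need not be synchronized with the sender's playback.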
Step 403: identify each frame of image in the at least two frame images to obtain the at least two image identification codes (for example, two-dimensional codes) presented in the area.
Each of the at least two frame images carries an image identification code.
Step 404: analyze the at least two image identification codes to obtain the coding symbols included in each image identification code.
For example, when the image identification code is a two-dimensional code, the coding symbol is the basic identification unit of the two-dimensional code, that is, a black or white image symbol.
Step 405: obtain the codeword corresponding to each coding symbol based on the coding strategy of the coding symbols, and combine the codewords to obtain the codeword sequence corresponding to each image identification code.
Each codeword corresponds to one coding symbol; for example, a black coding symbol may correspond to codeword 1 and a white coding symbol to codeword 0. The coding strategy used in step 405 depends on the type of image identification code: for example, when the image identification code is a two-dimensional code, a Quick Response (QR) coding strategy is used, in which one byte of data in the target data segment yields an 8-bit codeword sequence.
Step 406: compare the coding symbols of the M regions in the ith frame image with the coding symbols of the M regions in the (i+j)th frame image.
Step 407: determine the codeword sequence corresponding to the (i+j)th frame image based on the comparison result and the codeword sequence corresponding to the ith frame image.
Here, i is greater than or equal to 1 and less than or equal to N; j is greater than or equal to 1 and i+j is less than or equal to N; N is the number of acquired frame images; and M is an integer greater than or equal to 2.
When the comparison result indicates that the coding symbols of the kth region of the ith frame image are consistent with those of the kth region of the (i+j)th frame image, the codeword corresponding to the coding symbols of the kth region of the ith frame image is determined as the codeword for the kth region of the (i+j)th frame image;
when the comparison result indicates that the coding symbols of the kth region of the ith frame image are inconsistent with those of the kth region of the (i+j)th frame image, the codeword for the kth region of the (i+j)th frame image is determined based on the coding strategy of the coding symbols;
the codewords thus obtained are combined into the codeword sequence corresponding to the (i+j)th frame image, where k is greater than or equal to 1 and less than or equal to M.
The coding symbols in some regions of the two-dimensional codes of adjacent or spaced frame images (the ith and (i+j)th frame images) may be identical. By comparing whether the coding symbols in corresponding regions of the two frame images are the same, the codeword sequence of an unchanged region can be reused, saving computation; codewords are re-determined from the coding strategy only for regions whose coding symbols differ.
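Steps 406–407 can be sketched as follows. Regions are modeled here as short `'B'`/`'W'` symbol strings, and `encode()` is a stand-in for the full coding-strategy lookup; both are illustrative assumptions.

```python
# Compare the coding symbols of the M regions of the ith frame with those
# of the (i+j)th frame; reuse the codeword of any unchanged region and
# re-encode only the regions that differ.

def encode(region_symbols):
    return ''.join('1' if s == 'B' else '0' for s in region_symbols)

def codewords_for_next_frame(regions_i, codewords_i, regions_ij):
    out = []
    for k, region in enumerate(regions_ij):
        if region == regions_i[k]:
            out.append(codewords_i[k])  # identical region: reuse codeword
        else:
            out.append(encode(region))  # differing region: re-encode
    return ''.join(out)                 # codeword sequence for frame i+j

regions_i = ['BW', 'WW', 'BB']
codewords_i = [encode(r) for r in regions_i]  # ['10', '00', '11']
regions_ij = ['BW', 'WB', 'BB']               # only the middle region changed
seq_ij = codewords_for_next_frame(regions_i, codewords_i, regions_ij)
# seq_ij == '100111'
```

In this toy run only one of the three regions is re-encoded; the other two codewords are carried over from frame i, which is exactly the saving the step describes.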
Step 408: decode the codeword sequences corresponding to the first frame image through the Nth frame image to obtain the target data.
EXAMPLE five
The foregoing embodiments are described with target data carrying file content (e.g., an e-book or advertisement); this embodiment is described with target data carrying instructions that direct the electronic device's operation.
As in the preceding embodiments, the information processing method of this embodiment applies to electronic devices such as smart phones and tablet computers, which often need to transmit large volumes of data (such as advertisements or electronic books); traditional Bluetooth or Wi-Fi transmission is costly to implement, and a single image identification code (such as a two-dimensional code or bar code) carries too little data to meet practical needs.
The method described in this embodiment likewise reduces the implementation cost of data transmission and supports large-capacity transfers. As before, electronic device 1 acquires target data from electronic device 2: user 2 triggers electronic device 2 to load the data into a series of (at least two, in practice often hundreds to tens of thousands of) image identification codes and to present them as a video stream, so that electronic device 1 can scan the image display area of electronic device 2. The image identification codes may be two-dimensional codes, bar codes, or any other image coding form. The processing performed by electronic device 1 is explained below.
the information processing method described in this embodiment is applied to the electronic device 1, and as shown in fig. 5a, includes the following steps:
step 501, image acquisition is performed on a video stream presentation area to obtain at least two frames of images.
As before, the video stream presentation area is the region of the display unit of electronic device 2 in which the image identification codes are presented; user 1 adjusts the pose of electronic device 1 so that this display unit falls within its image acquisition area. The at least two frame images carry the image identification codes for all data segments of the target data, and the scanning frequency of electronic device 1 over the presentation area is greater than or equal to the frame rate at which electronic device 2 presents the video stream, so that electronic device 1 captures every image identification code presented by electronic device 2.
Step 502: identify each frame of image in the at least two frame images to obtain the at least two image identification codes (for example, two-dimensional codes) presented in the area.
Each of the at least two frame images carries an image identification code.
Step 503: analyze the at least two image identification codes to obtain the coding symbols included in each image identification code.
For example, when the image identification code is a two-dimensional code, the coding symbol is the basic identification unit of the two-dimensional code, that is, a black or white image symbol.
Step 504: obtain the codeword corresponding to each coding symbol based on the coding strategy of the coding symbols, and combine the codewords to obtain the codeword sequence corresponding to each image identification code.
Each codeword corresponds to one coding symbol; for example, a black coding symbol may correspond to codeword 1 and a white coding symbol to codeword 0. The coding strategy used in step 504 depends on the type of image identification code: for example, when the image identification code is a two-dimensional code, a QR coding strategy is used, in which one byte of data in the target data segment yields an 8-bit codeword sequence.
Step 505: decode the codeword sequences corresponding to the at least two image identification codes to obtain the target data.
The codeword sequences corresponding to the at least two image identification codes may be decoded individually to obtain data segments that are then combined into the target data; alternatively, the codeword sequences may be decoded together to obtain the target data directly.
Step 506: parse the target data and extract at least one instruction from it.
The target data may carry instructions alone, or instructions together with content (such as e-book or advertisement files); the electronic device extracts each instruction from its corresponding data segment of the target data according to the pre-agreed encapsulation format of the target data.
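Step 506 can be sketched as follows under a simple, hypothetical encapsulation: a one-byte instruction count, followed by length-prefixed instruction strings, followed by the content. The real format is whatever the two devices agree on in advance; this layout is an assumption made only for illustration.

```python
# Parse target data into (instructions, content) under the assumed
# encapsulation: [count][len][instruction]...[content].

def parse_target_data(data):
    count = data[0]
    instructions, pos = [], 1
    for _ in range(count):
        length = data[pos]
        instructions.append(data[pos + 1:pos + 1 + length].decode())
        pos += 1 + length
    return instructions, data[pos:]  # remaining bytes are the content

payload = bytes([1, 7]) + b'present' + b'ad page bytes'
instructions, content = parse_target_data(payload)
# instructions == ['present'], content == b'ad page bytes'
```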
Step 507: extract target content from the target data based on the instruction, and/or extract target content from the content stored in the electronic device.
Several cases arise when extracting target content based on the instruction:
case 1) extracting instructions from target data only
Executing the instructions to obtain target content from the content stored by the electronic device (or from a network); the content stored in the electronic device may be information maintained by any application in the electronic device, such as a user name (corresponding to the address book application) of the electronic device, and a WeChat account name of a user of the electronic device.
Case 2) Instructions are extracted from the target data and executed to extract the target content from the content carried by the target data.
The target data may carry several instructions, each of which may direct different content to be extracted from the target data as the target content to operate on.
Case 3) Instructions are extracted from the target data and executed to extract target content both from the content carried by the target data and from the content stored by the electronic device.
The content extracted from the target data and the content extracted from the device's locally stored content are combined to serve as the target content, which is then operated on as the instruction directs.
Step 508: perform the operation indicated by the instruction using the extracted target content.
Depending on the business scenario, the operation may be presenting the content, storing it, or sending it to another electronic device.
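Step 508 can be sketched as a simple dispatch from instruction name to operation. The operation names and handlers below are illustrative assumptions, not part of the source; a real device would invoke its display, storage, or communication subsystems instead.

```python
# Dispatch an extracted instruction to the operation it indicates,
# applied to the extracted target content.

def present(content):
    return f'presented: {content}'

def store(content):
    return f'stored: {content}'

OPERATIONS = {'present': present, 'store': store}

def execute(instruction, target_content):
    return OPERATIONS[instruction](target_content)

result = execute('present', 'advertisement')
# result == 'presented: advertisement'
```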
In one application scenario, as shown in fig. 5b, user 1 holds electronic device 1 to acquire target data from electronic device 2 held by user 2. User 1 adjusts the pose of electronic device 1 so that the display unit of electronic device 2 falls within its image acquisition area, and triggers electronic device 1 to perform the steps above, extracting the image identification codes from the video stream and resolving the target data from them. The target data carries instructions and advertisement content; electronic device 1 executes the instructions to: extract the advertisement content from the target data and present it in a graphical interface; detect whether the user triggers an operation to purchase a product; and, if such an operation is received, extract the product link from the target data and trigger a browser application to present the product's purchase page based on the link.
In another application scenario, as shown in fig. 5c, user 1 again holds electronic device 1, adjusts its pose so that the display unit of electronic device 2 falls within its image acquisition area, and triggers it to perform the steps above, extracting the image identification codes from the video stream and resolving the target data. Here the target data carries instructions and an electronic invitation; the electronic device executes the instructions to: extract the name of the user of electronic device 1 from its address book application; extract the electronic invitation from the target data; add user 1's name on a new line of the invitation to synthesize the invitation (the target content) addressed to user 1; and present it on the graphical interface.
EXAMPLE six
This embodiment describes an electronic apparatus (corresponding to the electronic apparatus 1 of the previous embodiment), as shown in fig. 6a, including:
the acquisition unit 10 is configured to perform image acquisition on a video stream presentation area to obtain at least two frames of images;
the identification unit 20 is configured to identify each frame of image in the at least two frame images to obtain at least two image identification codes presented in the area;
the first parsing unit 30 is configured to parse the at least two image identification codes to obtain the coding symbols included in each image identification code, obtain the codeword corresponding to each coding symbol based on the coding strategy of the coding symbols, and combine the codewords to obtain the codeword sequence corresponding to each image identification code;
and the decoding unit 40 is configured to decode the codeword sequences corresponding to the at least two image identification codes to obtain the target data.
In an embodiment, the acquisition unit 10 is further configured to identify each frame of image presented in the video stream presentation area, to start image acquisition when a frame image presented in the area is recognized to include the start image identification code, and to end image acquisition when a frame image presented in the area is recognized to include the end image identification code.
In an embodiment, the identification unit 20 is further configured to identify the position of the image identification code in the first frame image of the at least two frame images to obtain an image identification code locating area, where the locating area identifies the regions of the first frame image in which image identification codes are distributed;
and to identify, based on the image identification code locating area, the corresponding image identification code in the frame images subsequent to the first frame image.
As an embodiment, the decoding unit 40 includes:
the comparison module is used for comparing the coding symbols in the ith frame image with the coding symbols of the M areas in the (i + j) th frame image;
a determining module, configured to determine a codeword sequence corresponding to the i + j frame image based on the comparison result and the codeword sequence corresponding to the i frame image;
the decoding module is used for decoding codeword sequences corresponding to the first frame image to the Nth frame image to obtain the target data;
wherein i is greater than or equal to 1 and less than or equal to N, j is greater than or equal to 1 and i + j is less than or equal to N, N is the number of acquired frame images, and M is an integer greater than or equal to 2.
In an embodiment, when the comparison result indicates that the coding symbols of the kth region of the ith frame image are consistent with those of the kth region of the (i+j)th frame image, the determining module is further configured to determine the codeword corresponding to the coding symbols of the kth region of the ith frame image as the codeword for the kth region of the (i+j)th frame image;
when the comparison result indicates that the coding symbols of the kth region of the ith frame image are inconsistent with those of the kth region of the (i+j)th frame image, to determine the codeword for the kth region of the (i+j)th frame image based on the coding strategy of the coding symbols;
and to combine the codewords thus obtained into the codeword sequence corresponding to the (i+j)th frame image, where k is greater than or equal to 1 and less than or equal to M.
As an embodiment, as shown in fig. 6b, the electronic device further includes:
a second parsing unit 50, configured to parse the target data and extract at least one instruction from the target data;
an extracting unit 60, configured to extract target content from the target data based on the instruction, and/or extract target content from data stored in the electronic device;
a content operating unit 70 for executing the operation indicated by the instruction using the target content extracted by the extracting unit.
In practical applications, the acquisition unit 10 may be implemented as a camera in the electronic device; the identification unit 20, the first parsing unit 30, the decoding unit 40, the second parsing unit 50, the extraction unit 60, and the content operation unit 70 may be implemented by a Microcontroller Unit (MCU), a Field Programmable Gate Array (FPGA), or an Application Specific Integrated Circuit (ASIC) in the electronic device.
Those of ordinary skill in the art will understand that all or part of the steps of the method embodiments may be implemented by program instructions executed on relevant hardware; the program may be stored in a computer-readable storage medium and, when executed, performs the steps of the method embodiments. The aforementioned storage medium includes: a removable storage device, Read-Only Memory (ROM), Random Access Memory (RAM), a magnetic disk, an optical disk, or any other medium capable of storing program code.
Alternatively, the integrated unit of the present invention may be stored in a computer-readable storage medium if it is implemented in the form of a software functional module and sold or used as a separate product. Based on such understanding, the technical solutions of the embodiments of the present invention may be essentially implemented or a part contributing to the prior art may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present invention. And the aforementioned storage medium includes: a removable storage device, a ROM, a RAM, a magnetic or optical disk, or various other media that can store program code.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (10)

1. An information processing method applied to an electronic device, the method comprising:
acquiring images of a video stream presentation area to obtain at least two frames of images;
identifying each frame of image in the at least two frames of images to obtain at least two image identification codes of the presentation area;
analyzing the at least two image identification codes to obtain a coding symbol included by each image identification code, obtaining a code word corresponding to the coding symbol based on a coding strategy of the coding symbol, and combining the code words corresponding to the coding symbols to obtain a code word sequence corresponding to each image identification code;
decoding codeword sequences corresponding to the at least two image identification codes to obtain target data;
the decoding of the codeword sequences corresponding to the at least two image identification codes to obtain target data includes:
comparing the coding symbols in the ith frame image with the coding symbols of the M areas in the (i + j) th frame image;
determining a code word sequence corresponding to the i + j frame image based on the comparison result and the code word sequence corresponding to the i frame image;
determining a codeword sequence corresponding to the i + j frame image based on the comparison result and the codeword sequence corresponding to the i frame image, including:
when the comparison result represents that the coding symbol of the kth region of the ith frame image is consistent with the coding symbol of the kth region of the i + j frame image, determining the code word corresponding to the coding symbol of the kth region of the ith frame image as the code word corresponding to the coding symbol of the kth region of the i + j frame image;
the method further comprises the following steps:
analyzing the target data and extracting at least one instruction from the target data;
extracting target content from the target data based on the instruction, and/or extracting target content from data stored by the electronic equipment;
and executing the operation indicated by the instruction by using the extracted target content.
2. The method of claim 1, wherein the image capturing the video stream presentation area results in at least two frames of images, comprising:
identifying each frame of image presented by the video stream presentation area;
when the frame image presented by the video stream presenting area is identified to comprise the initial image identification code, starting image acquisition until,
and ending image acquisition when the frame image presented by the video stream presenting area is identified to comprise the ending image identification code.
3. The method of claim 1, wherein the identifying each frame image of the at least two frame images to obtain at least two image identification codes presented in the area comprises:
identifying the position of an image identification code in a first frame image of the at least two frame images to obtain an image identification code positioning area, wherein the image identification code positioning area identifies the areas of the first frame image in which image identification codes are distributed;
and identifying the corresponding image identification code in each frame image after the first frame image based on the image identification code positioning area.
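Claim 3 amortizes localization: the code region is found once in the first frame and reused for every later frame. A minimal sketch, with hypothetical helpers (`locate_code_region`, `decode_region`); a real implementation would localize via the code's finder patterns rather than a fixed crop.

```python
def identify_codes(frames, locate_code_region, decode_region):
    """Run the (relatively expensive) localization only on the first
    frame, then decode the same positioning area in every frame."""
    region = locate_code_region(frames[0])   # image identification code positioning area
    return [decode_region(frame, region) for frame in frames]
```

For frames captured from a fixed presentation area, the code region is stable across frames, which is what makes the single localization pass sound.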
4. The method of claim 1, further comprising:
decoding the code word sequences corresponding to the first frame image to the Nth frame image to obtain the target data; wherein
i is greater than or equal to 1 and less than or equal to N, j is greater than or equal to 1, i+j is less than or equal to N, N is the number of acquired frame images, and M is an integer greater than or equal to 2.
5. The method of claim 1, further comprising:
when the comparison result represents that the coding symbol of the kth region of the ith frame image is inconsistent with the coding symbol of the kth region of the (i+j)th frame image, determining a code word corresponding to the coding symbol of the kth region of the (i+j)th frame image based on a coding strategy of the coding symbol;
and combining the obtained code words to obtain a code word sequence corresponding to the (i+j)th frame image; wherein k is greater than or equal to 1 and less than or equal to M.
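Taken together, claims 1 and 5 describe a per-region differential decode: a region whose coding symbol is unchanged from frame i reuses the already-computed code word, and only changed regions are decoded from scratch. A minimal sketch with hypothetical names (`decode_symbol` stands in for the unspecified coding strategy):

```python
def diff_decode(prev_symbols, prev_codewords, curr_symbols, decode_symbol):
    """Code word sequence for frame i+j, given frame i.

    For each region k (1 <= k <= M): if the coding symbol is consistent
    with frame i, reuse the code word from frame i; otherwise decode the
    new symbol via the coding strategy.
    """
    codewords = []
    for prev_sym, prev_cw, curr_sym in zip(prev_symbols, prev_codewords, curr_symbols):
        if curr_sym == prev_sym:
            codewords.append(prev_cw)                  # consistent: reuse
        else:
            codewords.append(decode_symbol(curr_sym))  # inconsistent: re-decode
    return codewords
```

This is what lets the scheme transmit long streams cheaply: successive frames typically differ in only a few of the M regions, so most code words are copied rather than recomputed.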
6. An electronic device, characterized in that the electronic device comprises:
the acquisition unit is used for acquiring images of the video stream presentation area to obtain at least two frames of images;
the identification unit is used for identifying each frame image of the at least two frame images to obtain at least two image identification codes presented in the area;
the first analysis unit is used for analyzing the at least two image identification codes to obtain a coding symbol included by each image identification code, obtaining a code word corresponding to the coding symbol based on a coding strategy of the coding symbol, and combining the code words corresponding to the coding symbols to obtain a code word sequence corresponding to each image identification code;
the decoding unit is used for decoding codeword sequences corresponding to the at least two image identification codes to obtain target data;
the decoding unit includes:
the comparison module is used for comparing the coding symbols of the M regions in the ith frame image with the coding symbols of the M regions in the (i+j)th frame image;
the determining module is configured to determine a code word sequence corresponding to the (i+j)th frame image based on the comparison result and the code word sequence corresponding to the ith frame image;
the determining module is further configured to determine, when the comparison result indicates that the coding symbol of the kth region of the ith frame image is consistent with the coding symbol of the kth region of the (i+j)th frame image, the code word corresponding to the coding symbol of the kth region of the ith frame image as the code word corresponding to the coding symbol of the kth region of the (i+j)th frame image;
the electronic device further includes:
the second analysis unit is used for analyzing the target data and extracting at least one instruction from the target data;
the extracting unit is used for extracting target content from the target data based on the instruction and/or extracting the target content from the data stored in the electronic equipment;
a content operating unit configured to execute the operation indicated by the instruction using the target content extracted by the extracting unit.
7. The electronic device of claim 6,
the identification unit is further used for identifying each frame image presented by the video stream presentation area;
the identification unit is further used for triggering the acquisition unit to start image acquisition when it is identified that the frame image presented in the video stream presentation area includes the initial image identification code,
and for triggering the acquisition unit to end image acquisition when it is identified that the frame image presented in the video stream presentation area includes the ending image identification code.
8. The electronic device of claim 6,
the identification unit is further used for identifying the position of an image identification code in a first frame image of the at least two frame images to obtain an image identification code positioning area, wherein the image identification code positioning area identifies the areas of the first frame image in which image identification codes are distributed;
and for identifying the corresponding image identification code in each frame image after the first frame image based on the image identification code positioning area.
9. The electronic device of claim 6, wherein the decoding unit further comprises:
the decoding module is used for decoding codeword sequences corresponding to the first frame image to the Nth frame image to obtain the target data;
wherein i is greater than or equal to 1 and less than or equal to N, j is greater than or equal to 1 and i + j is less than or equal to N, N is the number of acquired frame images, and M is an integer greater than or equal to 2.
10. The electronic device of claim 9,
the determining module is further configured to determine, when the comparison result indicates that the coding symbol of the kth region of the ith frame image is inconsistent with the coding symbol of the kth region of the (i+j)th frame image, a code word corresponding to the coding symbol of the kth region of the (i+j)th frame image based on a coding strategy of the coding symbol;
and to combine the obtained code words to obtain a code word sequence corresponding to the (i+j)th frame image; wherein k is greater than or equal to 1 and less than or equal to M.
CN201510082875.5A 2015-02-15 2015-02-15 Information processing method and electronic equipment Active CN105991999B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510082875.5A CN105991999B (en) 2015-02-15 2015-02-15 Information processing method and electronic equipment


Publications (2)

Publication Number Publication Date
CN105991999A CN105991999A (en) 2016-10-05
CN105991999B true CN105991999B (en) 2019-12-24

Family

ID=57042522

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510082875.5A Active CN105991999B (en) 2015-02-15 2015-02-15 Information processing method and electronic equipment

Country Status (1)

Country Link
CN (1) CN105991999B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1710929A (en) * 2005-07-11 2005-12-21 北京中星微电子有限公司 Data communication system and method of hand-held apparatus
CN103401759A (en) * 2013-07-17 2013-11-20 吴东辉 Information transferring method based on graphic encryption recognition and application system thereof
CN103544463A (en) * 2013-10-23 2014-01-29 上海动联信息技术股份有限公司 Two-way wireless data communication system and method based on two-dimensional code
CN103581672A (en) * 2012-08-06 2014-02-12 深圳市腾讯计算机系统有限公司 Data transmission method and device
CN103634556A (en) * 2012-08-27 2014-03-12 联想(北京)有限公司 Information transmission method, information receiving method and electronic apparatus
CN104182712A (en) * 2014-08-22 2014-12-03 联想(北京)有限公司 Information processing method and electronic equipment

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100469108C (en) * 2006-11-01 2009-03-11 李博航 Real time video image transmission technology
EP2093697B1 (en) * 2008-02-25 2017-08-23 Telefonaktiebolaget LM Ericsson (publ) Method and arrangement for retrieving information comprised in a barcode
CN102760242B (en) * 2012-05-16 2016-09-14 孟智平 The encoding and decoding of a kind of three-dimension code and using method
CN102831163A (en) * 2012-07-20 2012-12-19 江苏缨思贝睿物联网科技有限公司 Data transfer method and data transfer system
JP2014139732A (en) * 2013-01-21 2014-07-31 Sony Corp Image processing device, image processing method, program and display device



Similar Documents

Publication Publication Date Title
US20140152882A1 (en) Mobile device having object-identification interface
US20120151293A1 (en) Sequenced Two-Dimensional Codes in Video
KR102002024B1 (en) Method for processing labeling of object and object management server
CN107092614A (en) Integrated image search terminal, equipment and its method of servicing
CN109194689B (en) Abnormal behavior recognition method, device, server and storage medium
US9633272B2 (en) Real time object scanning using a mobile phone and cloud-based visual search engine
CN110008997B (en) Image texture similarity recognition method, device and computer readable storage medium
CN104980887A (en) Bluetooth connection establishment method and intelligent terminal
CN104038705A (en) Video producing method and device
WO2017113710A1 (en) Method and apparatus for batch processing of photos
US8702001B2 (en) Apparatus and method for acquiring code image in a portable terminal
CN105991999B (en) Information processing method and electronic equipment
CN111638792A (en) AR effect presentation method and device, computer equipment and storage medium
WO2019084718A1 (en) Social group creation method and apparatus and mobile electronic device
CN106874979B (en) Bar code processing, displaying and reading method and device
CN110580423B (en) Personalized configuration method and device of intelligent equipment, electronic equipment and storage medium
CN115866348A (en) Data processing method, device and system based on two-dimensional code
CN114627464A (en) Text recognition method and device, electronic equipment and storage medium
CN104618644B (en) A kind of view data writes the method and terminal of file
KR102264920B1 (en) Image identification apparatus, method thereof and computer readable medium having computer program recorded therefor
CN107872730A (en) The acquisition methods and device of a kind of insertion content in video
EP1251702A3 (en) Video encoding and decoding
CN111832529A (en) Video text conversion method, mobile terminal and computer readable storage medium
CN110955799A (en) Face recommendation movie and television method based on target detection
US9979977B2 (en) Methods and devices of generating and decoding image streams with respective verification data

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant