CN108985421B - Method for generating and identifying coded information - Google Patents

Method for generating and identifying coded information

Info

Publication number
CN108985421B
CN108985421B (application CN201810920526.XA)
Authority
CN
China
Prior art keywords
image
position information
coded
positioning
images
Prior art date
Legal status
Active
Application number
CN201810920526.XA
Other languages
Chinese (zh)
Other versions
CN108985421A (en)
Inventor
梁文昭
Current Assignee
Shanghai Zhangmen Science and Technology Co Ltd
Original Assignee
Shanghai Zhangmen Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Zhangmen Science and Technology Co Ltd filed Critical Shanghai Zhangmen Science and Technology Co Ltd
Priority to CN201810920526.XA priority Critical patent/CN108985421B/en
Publication of CN108985421A publication Critical patent/CN108985421A/en
Priority to PCT/CN2019/100521 priority patent/WO2020034981A1/en
Application granted granted Critical
Publication of CN108985421B publication Critical patent/CN108985421B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06K GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K19/00 Record carriers for use with machines and with at least a part designed to carry digital markings
    • G06K19/06 Record carriers for use with machines and with at least a part designed to carry digital markings characterised by the kind of the digital marking, e.g. shape, nature, code
    • G06K19/06009 Record carriers for use with machines and with at least a part designed to carry digital markings characterised by the kind of the digital marking, e.g. shape, nature, code with optically detectable marking
    • G06K19/06037 Record carriers for use with machines and with at least a part designed to carry digital markings characterised by the kind of the digital marking, e.g. shape, nature, code with optically detectable marking multi-dimensional coding
    • G06K19/06046 Constructional details

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Image Analysis (AREA)

Abstract

The embodiments of the present application disclose a method for generating encoded information and a method for identifying it. One embodiment of the generation method comprises: adding a first preset number of positioning images at preset positions in a target image containing encoded information to generate a first encoded image; rotating the first encoded image about its central region by a second preset number of angles to obtain second encoded images at the second preset number of different rotation angles; and generating an encoded image set from the resulting second encoded images and outputting the set. This embodiment enriches the ways in which encoded information can be generated, so that the design requirements of different users can be met and the presentation forms of encoded information are enriched.

Description

Method for generating and identifying coded information
Technical Field
The embodiments of the present application relate to the field of computer technology, and in particular to methods for generating and identifying encoded information.
Background
With the rapid development of the mobile Internet, reading content by scanning information with a mobile terminal has become increasingly common. The currently popular approach is to construct two-dimensional codes: a mobile terminal scans a constructed two-dimensional code and thereby identifies the information encoded in it, such as a user ID or a web address. However, existing two-dimensional codes take a single form, usually a fixed square, and require at least three positioning points.
Disclosure of Invention
The embodiments of the present application provide a method for generating encoded information and a method for identifying it.
In a first aspect, an embodiment of the present application provides a method for generating encoded information, including: adding a first preset number of positioning images at preset positions in a target image containing encoded information to generate a first encoded image; rotating the first encoded image about its central region by a second preset number of angles to obtain second encoded images at the second preset number of different rotation angles; and generating an encoded image set from the resulting second encoded images and outputting the set.
In some embodiments, the first preset number is not greater than two, and adding the first preset number of positioning images at the preset positions includes: adding one positioning image at an edge position of the target image; or adding two positioning images, symmetrically or asymmetrically, at edge positions of the target image; or adding one positioning image in the central area and one at an edge position of the target image.
In some embodiments, generating the encoded image set from the resulting second encoded images comprises: storing the first encoded image and the resulting second encoded images in ascending or descending order of rotation angle, with the first encoded image as the reference, to generate the encoded image set; or storing only the resulting second encoded images in ascending or descending order of rotation angle, with the first encoded image as the reference, to generate the encoded image set.
In some embodiments, outputting the encoded image set comprises: displaying the encoded images in the set one by one at a preset frame rate.
In some embodiments, displaying the encoded images one by one at a preset frame rate includes: sequentially displaying the encoded images in the set at the preset frame rate, in ascending or descending order of rotation angle.
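The frame-by-frame display described above can be sketched as a simple schedule. This is purely illustrative: the helper name and the dict representation of the image set are assumptions, not part of the patent.

```python
def display_schedule(images_by_angle, fps, ascending=True):
    """Return (timestamp_seconds, image) pairs for frame-by-frame display.

    images_by_angle: dict mapping rotation angle (degrees) -> image object.
    Frames are spaced 1/fps seconds apart and ordered by rotation angle,
    ascending or descending, as the embodiment describes.
    """
    angles = sorted(images_by_angle, reverse=not ascending)
    return [(i / fps, images_by_angle[a]) for i, a in enumerate(angles)]

# Example: three encoded images displayed at 10 frames per second
schedule = display_schedule({0: "img0", 60: "img60", 120: "img120"}, fps=10)
```

A caller would then present each image at its timestamp; the scheduling mechanism itself (timer, UI loop) is outside the scope of this sketch.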
In a second aspect, an embodiment of the present application provides an apparatus for generating encoded information, including: a first generation unit configured to add a first preset number of positioning images at preset positions in a target image containing encoding information, generating a first encoded image; a second generating unit, configured to rotate the first encoded image by a second preset number of angles with a central region of the first encoded image as a center, to obtain second encoded images at a second preset number of different rotation angles, respectively; and a third generating unit configured to generate a set of encoded images from the obtained second encoded image, and output the set of encoded images.
In some embodiments, the first preset number is not greater than two, and the first generating unit is further configured to: adding a positioning image at the edge position of the target image; or symmetrically or asymmetrically adding two positioning images at the edge position of the target image; or a positioning image is added in the central area and the edge position of the target image respectively.
In some embodiments, the third generating unit comprises: a first generating subunit configured to store the first encoded image and the resulting second encoded images in ascending or descending order of rotation angle, with the first encoded image as the reference, to generate the encoded image set; or a second generating subunit configured to store only the resulting second encoded images in ascending or descending order of rotation angle, with the first encoded image as the reference, to generate the encoded image set.
In some embodiments, the third generating unit further comprises: and the display subunit is configured to display the coded images in the coded image set one by one according to a preset frame rate.
In some embodiments, the display subunit is further configured to: and sequentially displaying the coded images in the coded image set at a preset frame rate according to the sequence of the rotation angles from small to large or from large to small.
In a third aspect, an embodiment of the present application provides a method for identifying encoded information, for identifying the encoded images in an encoded image set generated by a method described in any one of the foregoing first aspects, including: acquiring the encoded image displayed in each frame, identifying the first preset number of positioning images in the encoded image, and determining position information of the positioning images, wherein the encoded images in the set are displayed one by one at a preset frame rate; obtaining a position information sequence from the position information of the positioning image determined for each frame; and locating and identifying the encoded information in the encoded images of the set according to the position information sequence.
In some embodiments, locating and identifying coding information in a coded picture in a set of coded pictures based on a sequence of position information includes: selecting two coded images from the collected coded images, taking one of the two selected coded images as a pre-transformation image, taking the other one of the two selected coded images as a post-transformation image, and solving an affine transformation matrix; carrying out inverse transformation on the transformed image by using the solved affine transformation matrix to obtain a normalized image; determining the position of the central point of the normalized image according to the position information in the position information sequence; and identifying the coding information in the normalized image according to the position information sequence and the position of the central point.
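For the special case the patent emphasizes, where two captured frames differ only by a rotation about the image center, solving the affine transformation and applying its inverse reduces to estimating and undoing a rotation. The sketch below operates on positioning-point coordinates rather than whole images; all function names are invented for illustration and this is not the patent's implementation.

```python
import math

def estimate_rotation(center, p_before, p_after):
    """Estimate the rotation angle (radians) mapping p_before to p_after
    about `center` -- the rotation-only special case of the affine
    transform between a pre-transformation and post-transformation image."""
    a0 = math.atan2(p_before[1] - center[1], p_before[0] - center[0])
    a1 = math.atan2(p_after[1] - center[1], p_after[0] - center[0])
    return a1 - a0

def inverse_rotate(center, p, angle):
    """Apply the inverse transformation to point p, normalizing it back
    toward the reference orientation."""
    dx, dy = p[0] - center[0], p[1] - center[1]
    c, s = math.cos(-angle), math.sin(-angle)
    return (center[0] + dx * c - dy * s, center[1] + dx * s + dy * c)

angle = estimate_rotation((0, 0), (1, 0), (0, 1))   # a 90-degree rotation
back = inverse_rotate((0, 0), (0, 1), angle)        # approximately (1, 0)
```

In a full implementation the same inverse transform would be applied pixel-wise (or via an image library's affine warp) to produce the normalized image.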
In some embodiments, identifying the encoded information in the normalized image based on the sequence of location information and the location of the center point comprises: performing polar coordinate transformation on the position information in the position information sequence by taking the central point as a pole point; and identifying the coding information in the normalized image according to the polar coordinates in the position information sequence.
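The polar-coordinate transformation with the center point as the pole can be written directly; `to_polar` is an invented helper name, not an API from the patent.

```python
import math

def to_polar(center, point):
    """Polar coordinates (r, theta) of `point` with `center` as the pole,
    as used to index position information around the image center."""
    dx, dy = point[0] - center[0], point[1] - center[1]
    return math.hypot(dx, dy), math.atan2(dy, dx)

r, theta = to_polar((5, 5), (5, 8))  # a point directly above the pole
```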
In some embodiments, locating and identifying coding information in a coded picture in a set of coded pictures based on a sequence of position information includes: selecting a coded image from the collected coded images, marking a position indicated by position information in the position information sequence in the selected coded image, and forming a positioning area; and identifying information in the selected code image, which is located in the positioning area.
In some embodiments, obtaining the position information sequence from the position information of the positioning image determined for each frame includes: determining whether a frame has been lost; and when frame loss is determined to exist, supplementing the position information of the positioning image in the lost frame's encoded image according to the position change rule of the positioning image, to obtain the position information sequence.
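A minimal sketch of the frame-loss supplementation, under the illustrative assumption that the "position change rule" is a fixed angular step per frame about a known center; the function name and data layout are invented.

```python
import math

def fill_dropped_frames(positions, center, step_deg):
    """Supplement position information for dropped frames.

    positions: list of (x, y) of the positioning image per captured frame,
    with None where a frame was lost. A missing position is reconstructed
    by rotating the previous frame's position one angular step about
    `center` (the assumed position change rule).
    """
    out = []
    for i, p in enumerate(positions):
        if p is None and out:
            px, py = out[-1]
            a = math.radians(step_deg)
            dx, dy = px - center[0], py - center[1]
            p = (center[0] + dx * math.cos(a) - dy * math.sin(a),
                 center[1] + dx * math.sin(a) + dy * math.cos(a))
        out.append(p)
    return out

# Frame 1 was lost; its position is reconstructed at 60 degrees
seq = fill_dropped_frames([(1, 0), None, (-0.5, math.sin(math.radians(120)))],
                          center=(0, 0), step_deg=60)
```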
In some embodiments, before obtaining the position information sequence from the position information of the positioning image determined for each frame, the method further includes: determining whether position information identical to that of the positioning image determined for the current frame already exists among the previously determined position information; and, in response to determining that it exists, stopping the acquisition of encoded images.
In some embodiments, before obtaining the position information sequence from the position information of the positioning image determined for each frame, the method further includes: determining whether the current acquisition duration has reached a preset duration; and, in response to determining that it has, stopping the acquisition of encoded images.
In a fourth aspect, an embodiment of the present application provides an apparatus for identifying encoded information, configured to identify an encoded image in an encoded image set generated by a method as described in any of the foregoing first aspects, including: the acquisition unit is configured to acquire the code images displayed by each frame, identify a first preset number of positioning images in the code images, and determine position information of the positioning images, wherein the code images in the code image set are displayed one by one according to a preset frame rate; the sequence generating unit is configured to obtain a position information sequence according to the position information of the positioning image determined by each frame; and the identification unit is configured to locate and identify the coding information in the coding images in the coding image set according to the position information sequence.
In some embodiments, the identification unit comprises: the solving subunit is configured to select two coded images from the collected coded images, use one of the two selected coded images as a pre-transformation image, use the other coded image of the two selected coded images as a post-transformation image, and solve the affine transformation matrix; a transformation subunit configured to perform inverse transformation on the transformed image by using the solved affine transformation matrix to obtain a normalized image; a center determining subunit configured to determine a position of a center point of the normalized image according to the position information in the position information sequence; and the first identification subunit is configured to identify the coding information in the normalized image according to the position information sequence and the position of the central point.
In some embodiments, the first identification subunit is further configured to: performing polar coordinate transformation on the position information in the position information sequence by taking the central point as a pole point; and identifying the coding information in the normalized image according to the polar coordinates in the position information sequence.
In some embodiments, the identification unit further comprises: a marking subunit configured to select a coded image from the acquired coded images, mark a position indicated by the position information in the position information sequence in the selected coded image, and form a positioning area; and the second identification subunit is configured to identify the information in the selected code image, which is located in the positioning area.
In some embodiments, the sequence generating unit comprises: a determining subunit configured to determine whether a frame was lost during acquisition; and a supplementing subunit configured to, when frame loss is determined to exist, supplement the position information of the positioning image in the lost frame's encoded image according to the position change rule of the positioning image, to obtain the position information sequence.
In some embodiments, the apparatus further comprises: a first determining unit configured to determine whether position information identical to that of the positioning image determined for the current frame already exists among the previously determined position information, and, in response to determining that it exists, to stop the acquisition of encoded images.
In some embodiments, the apparatus further comprises: a second determining unit configured to determine whether the current acquisition duration has reached a preset duration, and, in response to determining that it has, to stop the acquisition of encoded images.
In a fifth aspect, an embodiment of the present application provides an electronic device, including: one or more processors; a storage device having one or more programs stored thereon; when executed by one or more processors, cause the one or more processors to implement a method as described in any of the embodiments of the first or third aspects above.
In a sixth aspect, an embodiment of the present application provides a computer-readable medium on which a computer program is stored, where the computer program, when executed by a processor, implements the method described in any of the embodiments of the first or third aspects above.
According to the methods for generating and identifying encoded information provided by the embodiments of the present application, a first encoded image can be generated by adding a first preset number of positioning images at preset positions in a target image containing encoded information. Then, by rotating the first encoded image about its central region by a second preset number of angles, second encoded images at the second preset number of different rotation angles can be obtained. An encoded image set may then be generated from the resulting second encoded images and output. This embodiment enriches the ways in which encoded information can be generated; therefore, the design requirements of different users can be met, and the presentation forms of the encoded information are enriched.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture diagram in which one embodiment of the present application may be applied;
FIG. 2 is a flow diagram of one embodiment of a method of generating encoded information according to the present application;
FIGS. 3A-3D are schematic structural diagrams of four embodiments of a first encoded image according to the present application;
FIG. 4 is a flow diagram for one embodiment of a method of identifying encoded information according to the present application;
FIG. 5 is a flow diagram of yet another embodiment of a method of identifying encoded information according to the present application;
FIG. 6 is a schematic block diagram of a computer system suitable for use in implementing an electronic device according to embodiments of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Fig. 1 illustrates an exemplary system architecture 100 to which the generation method and the identification method of the encoded information of the embodiments of the present application may be applied.
The system architecture 100 may include terminals 101, 102, 103, networks 104, 105, and a server 106. The network 104 may be the medium used to provide communication links between the terminals 101, 102, 103. The network 105 may be the medium used to provide communication links between the terminals 101, 102, 103 and the server 106. The networks 104, 105 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
The users may interact with each other via the network 104 using the terminals 101, 102, 103 to receive or send messages or the like. Meanwhile, the user can also use the terminals 101, 102, 103 to interact with the server 106 through the network 105 to obtain information and the like. The terminals 101, 102, 103 may be installed with various client applications, such as an encoded information generation and recognition application, a web browser, a shopping application, an instant messenger, and the like.
The user may also process a target image including encoded information using an encoded information generation application installed in the terminals 101, 102, and 103. A set of encoded images may thus be generated. In addition, the user can also use the terminals 101, 102, 103 to capture a coded image of the set of coded images. In this way, the terminals 101, 102, 103 can perform analysis processing on each encoded image. And the results of the processing (e.g., the identified encoded information) may be presented to the user.
Here, the terminals 101, 102, and 103 may be hardware or software. When the terminals 101, 102, 103 are hardware, they may be various electronic devices with display screens, including but not limited to smartphones, tablet computers, wearable devices, e-book readers, MP3 players (MPEG Audio Layer III), laptop computers, desktop computers, and the like. When the terminals 101, 102, 103 are software, they can be installed in the electronic devices listed above, and may be implemented as multiple pieces of software or software modules (e.g., to provide distributed services) or as a single piece of software or software module. This is not particularly limited herein.
The server 106 may be a server providing various services, for example, a background server providing support for various applications installed on the terminals 101, 102, 103. The background server can analyze and process the operation behavior of the user on the application and send corresponding feedback information to the user according to the processing result. As an example, after receiving the encoding information generation instruction sent by the terminal, the background server may perform analysis processing on the target image containing the encoding information. And may feed back the processing results (e.g., the set of encoded images) to the terminal.
Here, the server 106 may be hardware or software. When the server 106 is hardware, it may be implemented as a distributed server cluster composed of multiple servers, or may be implemented as a single server. When the server 106 is software, it may be implemented as multiple pieces of software or software modules (e.g., to provide distributed services), or as a single piece of software or software module. And is not particularly limited herein.
It should be noted that the method for generating the encoded information provided in the embodiment of the present application is generally executed by the terminals 101, 102, and 103 or the server 106. And the identification method of the encoded information is generally performed by the terminals 101, 102, 103.
It should be understood that the number of terminals, networks and servers in fig. 1 is merely illustrative. There may be any number of terminals, networks, and servers, as desired for an implementation.
With continued reference to FIG. 2, a flow 200 of one embodiment of a method of generating encoded information according to the present application is shown. The method for generating the coded information may include the steps of:
step 201, adding a first preset number of positioning images at preset positions in a target image containing coding information, and generating a first coding image.
In the present embodiment, an execution subject of the method for generating the encoded information (e.g., the terminal 101, 102, 103 or the server 106 shown in fig. 1) may add a first preset number of positioning images at preset positions of a target image containing the encoded information, thereby generating a first encoded image. That is, the target image to which the first preset number of positioning images are added may be used as the first encoded image.
Here, the encoded information may be information generated by encoding a character string using any of various encoding methods (for example, binary encoding). The character string may be, but is not limited to, text information comprising at least one character such as a letter, number, symbol, or Chinese character. The content of the string may include, but is not limited to, a web address, a physical address, business card information, merchandise information, transaction information, a Wi-Fi password, and the like. The target image may be any image containing encoded information. It may be pre-stored locally on the execution subject, obtained by the execution subject from a network resource or another electronic device, or input into the execution subject by the user.
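As one concrete, illustrative instance of the "binary encoding" mentioned above, a character string can be turned into a bit sequence. The UTF-8 choice and the helper name are assumptions; the patent does not fix a particular encoding.

```python
def encode_string(s):
    """Encode a character string as a bit sequence: UTF-8 bytes,
    8 bits per byte, most significant bit first."""
    return [int(bit) for byte in s.encode("utf-8")
            for bit in format(byte, "08b")]

bits = encode_string("Hi")  # 2 bytes -> 16 bits
```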
In the present embodiment, the positioning image may be an image that serves as a positioning marker. For easy distinction, the positioning image usually has a representation different from that of the encoded information. For example, the positioning image may be a specially shaped image, such as a circle, a diamond, or a hexagon. This distinguishes it from the shape of the encoded image, such as a two-dimensional code or barcode, so that the content of the encoded information is not disturbed. The preset position may be any position in the target image that does not affect the encoded information, such as a corner of the target image. The preset positions and the first preset number can be set according to the actual situation.
In addition, the manner of adding the positioning image to the target image is not limited in the present application. For example, the positioning image may be used as an image layer directly overlaid on the image at the preset position of the target image. For another example, a positioning image or the like may be drawn at a preset position of the target image.
In some optional implementations of this embodiment, the first preset number may be not greater than two. I.e. the number of added scout images in the target image may not be larger than two. That is, an excessive number of anchor points are not required compared to the conventional two-dimensional code.
For example, the executing entity may add a positioning image at an edge position of the target image, thereby generating a first encoded image. As shown in fig. 3A, if the encoded information is located in the square frame region, a ring image (i.e., a positioning image) may be added at a certain corner (e.g., the upper left corner) of the square frame. The square frame may be a square as shown in fig. 3A, or may be a rectangle. In some application scenarios, the square frame may be replaced with a circular frame, such as a taiji code. The positioning image in that case may be, but is not limited to, a circular ring lying on the circular frame, or a curved or straight line inside the circular frame connecting the center and the edge.
For another example, the executing entity may add two scout images symmetrically or asymmetrically at edge positions of the target image, thereby generating the first encoded image. If the coded information is located in the square frame region, a solid dot image may be added to any two adjacent corners (e.g., the upper left corner and the upper right corner) of the square frame as shown in fig. 3B. Alternatively, as shown in fig. 3C, a positioning image may be added to each of two opposite corners of the square frame. In fig. 3C, the positioning image may be an image of a circle plus a solid dot. Wherein the circular ring and the solid dots may have the same center.
For another example, the execution subject may add one positioning image in the central region and one at an edge position of the target image, respectively, to generate the first encoded image. If the encoded information is located between two concentric circular frames, a circular ring image may be added on the outer circular frame, as shown in fig. 3D. Meanwhile, a solid dot image may be added on the inner circular frame, or a positioning image may be added at the center of the circle. The center points of the two positioning images may or may not coincide.
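The placement variants above (an edge marker, a center marker, or both) can be sketched on a toy pixel grid. This is purely illustrative, with invented helper names; a real implementation would composite actual ring or dot images onto the target image.

```python
def draw_dot(image, cx, cy, radius):
    """Draw a solid-dot positioning marker (pixel value 1) onto a 2D grid."""
    for y in range(len(image)):
        for x in range(len(image[0])):
            if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2:
                image[y][x] = 1
    return image

def first_encoded_image(width, height):
    """Toy target image (all zeros here) plus two positioning dots:
    one near the upper-left edge and one in the central region,
    in the spirit of the Fig. 3B-3D variants."""
    img = [[0] * width for _ in range(height)]
    draw_dot(img, 1, 1, 1)                      # edge marker
    draw_dot(img, width // 2, height // 2, 1)   # center marker
    return img

img = first_encoded_image(9, 9)
```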
It should be noted that, in general, the main role of the target image is to present the encoded information. The encoded information tends to occupy a large portion of the target image. That is, the edge position of the target image is basically the edge position of the encoded information.
Step 202, taking the central area of the first encoded image as the center, rotating the first encoded image by a second preset number of angles to obtain second encoded images of the second preset number of different rotation angles respectively.
In this embodiment, after the first encoded image is generated, the execution subject may rotate the first encoded image by a second preset number of different angles about the central region of the first encoded image, so that second encoded images at the second preset number of different rotation angles can be obtained. The rotation angle may include a rotation direction and a magnitude. Here, the second preset number and the rotation angles may be set according to actual requirements.
As an example, the execution subject may first rotate the first encoded image clockwise by 60°, 120°, and 180°, respectively. Thereafter, the execution subject may rotate the first encoded image counterclockwise by 60° and 120°, respectively. This yields five second encoded images at equal angular spacing. For another example, the execution subject may rotate in such a manner that the angular difference sequentially increases (or decreases), e.g., rotating the first encoded image in a clockwise (or counterclockwise) direction by 60°, 130°, 210°, and 300°, respectively; the angular differences are then 60°, 70°, 80°, and 90° in order. For another example, to improve data processing efficiency and simplify the processing procedure, the execution subject may rotate the first encoded image clockwise (or counterclockwise) by equal angular increments.
It can be understood that the above rotating process may be performed manually by a user, or implemented by the execution subject running a corresponding program. For example, the execution subject may perform an affine transformation on the first encoded image according to parameters set by the user, thereby obtaining the first of the second encoded images. The affine transformation can then be applied again with that second encoded image as input, and so on, until the number of second encoded images reaches the second preset number, or until the currently obtained second encoded image is identical to the original first encoded image, indicating that the image has been rotated a full revolution. Manual operation can thereby be reduced, and data processing efficiency improved.
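A rotation about the image center is one concrete affine transformation. The sketch below rotates a single point about a given center; it is a minimal illustration (a real implementation would transform every pixel), and the sign convention is an assumption:

```python
import math

def rotate_point(x, y, cx, cy, degrees):
    """Rotate the point (x, y) about the center (cx, cy).

    Positive angles are counter-clockwise in a conventional y-up frame;
    in y-down screen coordinates the same formula appears clockwise.
    """
    t = math.radians(degrees)
    dx, dy = x - cx, y - cy
    return (cx + dx * math.cos(t) - dy * math.sin(t),
            cy + dx * math.sin(t) + dy * math.cos(t))
```

Applying a 60° rotation six times returns a point to where it started, which matches the stop condition of comparing the current second encoded image against the original first encoded image.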
Step 203, generating a coded image set according to the obtained second coded images, and outputting the coded image set.
In this embodiment, the execution subject may generate the set of encoded images from the second encoded images obtained in step 202, and may output the set. The output here may be a storage output: the set of encoded images may be stored locally, for example, or stored to another electronic device, the cloud, etc.
As an example, the execution subject may store the second encoded images into a generated aggregate file in their order of generation, so that the aggregate file can be treated as the set of encoded images.
Optionally, the execution subject may store the first encoded image and the obtained second encoded image in an order from small to large or from large to small in rotation angle with respect to the first encoded image, so as to generate the encoded image set. That is, the first encoded image and the second encoded image may be included in the set of encoded images. Or the execution subject may store the obtained second encoded images in the order of the rotation angle from small to large or from large to small with the first encoded image as a reference, and generate the encoded image set. That is, the second encoded image may be included in the set of encoded images. The rotation angle here means a rotation angle in the same direction.
In some embodiments, the output may also be a transmission output and/or a display output. For example, the execution subject may send the set of encoded images to a terminal, or display the encoded images in the set. For instance, the execution subject may display all the encoded images on the screen at once, in an arbitrary order or in their storage order. For another example, the execution subject may display the encoded images in the set one by one at a preset frame rate, in an arbitrary order or in their storage order; that is, each encoded image is displayed in turn at the preset frame rate.
Further, in order to facilitate subsequent identification of the encoded images, the execution subject may sequentially display the encoded images in the encoded image set at a preset frame rate in an order from small to large or from large to small in terms of the rotation angle with reference to the first encoded image. Therefore, the identification efficiency of the coded information is improved, and the waiting time of the user is reduced.
It should be noted that the preset frame rate (i.e., the number of frames displayed per second) is not limited in the present application; for example, it may be set according to the number of encoded images. Further, the execution subject may display the set of encoded images only once, cycle through it a specified number of times (such as three times), display it for a specified duration (e.g., two seconds), or cycle through it continuously.
The method for generating encoded information according to this embodiment may generate the first encoded image by adding a first preset number of positioning images at preset positions in a target image containing the encoded information. Then, with the central area of the first encoded image as the center, the first encoded image is rotated by a second preset number of angles, so that second encoded images at a second preset number of different rotation angles can be obtained respectively. A set of encoded images may then be generated from the resulting second encoded images and output. This embodiment enriches the ways in which encoded information can be generated, which helps meet different design requirements of users and enriches the presentation forms of the encoded information.
Referring to fig. 4, a flow 400 of one embodiment of a method for identifying encoded information according to the present application is shown. The identification method of the coded information can be used for identifying the coded images in the coded image set generated by the method described in the above embodiment, and comprises the following steps:
step 401, collecting the encoded images displayed in each frame, identifying a first preset number of positioning images in the encoded images, and determining position information of the positioning images.
In the present embodiment, the execution subject of the identification method of the encoded information (for example, the terminals 101, 102, 103 shown in fig. 1) may capture the encoded image displayed in each frame, where the encoded images in the encoded image set are displayed one by one at a preset frame rate. For example, when another electronic device displays the encoded images in the set one by one at a preset frame rate, the user may capture the encoded image displayed in each frame using the camera on the execution subject. For another example, when the user triggers (e.g., clicks or long-presses) a specific area on the screen of the execution subject, the execution subject may display the encoded images in the set in that area one by one at a preset frame rate, while capturing the encoded image displayed in each frame on the screen by taking screenshots or by similar means.
In this embodiment, the execution subject may identify a first preset number of positioning images in the captured encoded image, and may determine the position information of the positioning images. For example, the execution subject may perform image recognition analysis on the encoded image to determine whether a first preset number of specially shaped images (i.e., images that differ in shape from most images) are present. If such images are found, they may be determined to be the positioning images.
For another example, the execution subject may perform image recognition analysis on the encoded image to determine whether there is an image therein that matches a pre-stored image. Wherein the storage location of the pre-stored image is not limited. Here, matching may refer to the similarity between image features reaching a threshold (e.g., 90% or 100%). If so, the matched image may be determined to be a positioning image.
As another example, the execution subject may input the encoded image into a pre-trained recognition model. In this way, the execution subject can recognize the positioning images using the recognition model, and the position information of the positioning images can be output. The recognition model can be obtained by training an initial model with a large number of positive and negative samples. A positive sample may be a sample encoded image containing a positioning image together with the sample position of that positioning image, while a negative sample may be a sample encoded image without a positioning image.
Here, the position information may be (but is not limited to) the position information of the positioning image within the encoded image, for example, within a designated area (or a screenshot). The position information may also be, but is not limited to, the coordinates of the center point of the positioning image. It should be noted that the position information of the positioning images in different encoded images may share the same origin of coordinates; for example, any corner (e.g., the upper left corner) or the center point of any edge of the encoded image may be used as the reference origin. This helps improve the processing efficiency of the data.
It will be appreciated that, as the encoded images in the set are displayed one after another, the positions of the encoded information and of the positioning images will often change, while the screen or designated area on which the encoded images are displayed is fixed. That is, the size and shape of the different encoded images are constant.
Step 402, obtaining a position information sequence according to the position information of the positioning image determined for each frame.
In this embodiment, the execution subject may obtain a position information sequence according to the position information of the positioning images determined for each frame. As an example, the execution subject may store the position information of the positioning image determined for each frame in order of acquisition time to obtain the position information sequence, where the position information of the positioning images in the same encoded image may form an array. For another example, if the encoded image includes at least two positioning images, the execution subject may store the position information of the at least two positioning images determined for each frame in order of acquisition time. For another example, the execution subject may analyze the position information of the positioning images determined for each frame to determine their relative positional relationship, and store the position information determined for each frame in clockwise (or counter-clockwise) order.
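Storing positions in clockwise order can be done by sorting them by angle around their centroid. The patent does not fix a method, so the following is a sketch under the assumption of y-down screen coordinates (where increasing `atan2` angle sweeps clockwise on screen):

```python
import math

def order_clockwise(points):
    """Order 2D points clockwise on screen (y grows downward) around their centroid."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    # In a y-down frame, sorting by increasing atan2 angle is a clockwise sweep.
    return sorted(points, key=lambda p: math.atan2(p[1] - cy, p[0] - cx))
```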
Step 403, positioning and identifying the encoded information in the encoded images in the encoded image set according to the position information sequence.
In this embodiment, the execution subject may locate and identify the encoded information in the encoded images in the set according to the position information sequence obtained in step 402. As an example, the execution subject may mark, in the currently acquired encoded image, the positions indicated by the position information in the sequence, and form a positioning region from the marked positions. For example, straight lines or arcs may be used to connect the marked positions in sequence, and the area enclosed by the connecting lines may serve as the positioning region. Alternatively, a center point may be determined from the marked positions, and the positioning region determined as the circle whose center is that point and whose radius is the distance from the center point to any marked position. The information located within the positioning region of the currently acquired encoded image (i.e., the encoded information) can then be identified.
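The circular-region variant (a center point derived from the marks, with the radius taken to any mark) can be sketched as follows. The names are invented, and taking the centroid as the center point is an assumption; it is one reasonable reading of "determine the center point from the positions of the markers":

```python
import math

def positioning_region(marks):
    """Circle through the marked positions: center = centroid of the marks,
    radius = distance from the center to one mark (here, the first)."""
    cx = sum(m[0] for m in marks) / len(marks)
    cy = sum(m[1] for m in marks) / len(marks)
    r = math.hypot(marks[0][0] - cx, marks[0][1] - cy)
    return (cx, cy), r

def inside(point, center, radius):
    """Whether a point falls within the positioning region."""
    return math.hypot(point[0] - center[0], point[1] - center[1]) <= radius
```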
Alternatively, the execution subject may first select an encoded image from the acquired encoded images, for example, one in which the image is relatively sharp, or the first acquired encoded image (which may be the first encoded image). Then, the positions indicated by the position information in the sequence may be marked in the selected encoded image and a positioning region formed, after which the information located within the positioning region of the selected encoded image can be identified.
It should be noted that the above locating and identifying processes may generally run in the background in order to reduce the impact on the user. In addition, in order to improve recognition efficiency and reduce the user's waiting time, the acquisition of encoded images need not continue indefinitely; in some cases, the execution subject may stop acquiring encoded images.
Alternatively, during acquisition, the execution subject may determine whether the position information determined for the current frame is identical to any previously determined position information of the positioning images. If identical position information is found, acquisition of encoded images may be stopped. It is understood that, if at least two positioning images are included in the same encoded image, identical position information mainly means position information identical to that of the at least two positioning images determined for the current frame.
In some embodiments, the performing subject may time when the acquisition is started. And during the acquisition process, it may be determined whether the current acquisition duration reaches a preset duration (e.g., 5 seconds). If the preset time duration is determined to be reached, the collection of the coded image can be stopped.
According to the identification method of encoded information provided by this embodiment, by capturing the encoded image displayed in each frame, a first preset number of positioning images in the encoded image can be identified and their position information determined, where the encoded images in the set are displayed one by one at a preset frame rate. A position information sequence can then be obtained from the position information of the positioning images determined for each frame. Finally, based on the position information sequence, the encoded information in the encoded images in the set can be located and identified. This implementation enables identification of the encoded images in the set generated by the method described in the above embodiments, and enriches the ways in which encoded information can be identified, which helps meet the needs of different users.
With further reference to fig. 5, a flow 500 of yet another embodiment of an identification method of encoded information according to the present application is shown. The identification method of the coded information can comprise the following steps:
step 501, collecting the encoded images displayed in each frame, identifying a first preset number of positioning images in the encoded images, and determining position information of the positioning images.
In the present embodiment, the execution subject of the identification method of the encoded information (for example, the terminals 101, 102, 103 shown in fig. 1) may capture the encoded image displayed in each frame, where the encoded images in the set are displayed one by one at a preset frame rate, so that a first preset number of positioning images in the encoded image can be identified and their position information determined. Reference may be made to the related description in step 401 of the embodiment in fig. 4, which is not repeated here.
Step 502, determining whether there is a frame loss situation.
In this embodiment, the execution subject may determine whether frames were lost during capture according to the interval between two adjacently captured encoded images. For example, in order of acquisition time, the acquisition intervals between the first and second encoded images and between the second and third encoded images are both 0.2 seconds, while the acquisition interval between the third and fourth encoded images is 0.4 seconds. In this case, the execution subject may determine that a frame was lost between the third and fourth encoded images.
As an example, the execution subject may determine a position change rule of the positioning image according to the position information determined for each frame, and then use this rule to determine whether frames were lost. For example, suppose the distance between any two adjacent positions is L. If the distance between two adjacent determined positions is not L, the position information of the positioning image is missing between those two positions; that is, frames were lost during acquisition, and no encoded image was captured that describes the missing position information of the positioning image.
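Checking the constant-step rule can be sketched as scanning consecutive positions for distances larger than the expected step. This is an illustrative sketch: the function name is invented, and the per-frame step L is assumed to be already known from the observed change rule:

```python
import math

def find_dropped_frames(positions, step, tol=1e-6):
    """Return indices i where a frame appears to be missing between
    positions[i] and positions[i+1]: their distance exceeds the step L."""
    gaps = []
    for i in range(len(positions) - 1):
        d = math.hypot(positions[i + 1][0] - positions[i][0],
                       positions[i + 1][1] - positions[i][1])
        if d > step + tol:
            gaps.append(i)
    return gaps
```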
In this embodiment, if the execution subject determines that no frames were lost, a position information sequence may be obtained from the position information of the positioning images determined for each frame. Reference may be made to the related description in step 402 of the embodiment in fig. 4, which is not repeated here. If the execution subject determines that frames were lost, step 503 may be performed.
Step 503, when it is determined that there is a frame loss, according to the position change rule of the positioning image, supplementing the position information of the positioning image in the frame-loss encoded image, and obtaining a position information sequence.
In this embodiment, if it is determined that frames were lost during acquisition, the execution subject may supplement the position information of the positioning image in the lost encoded frames according to the position change rule of the positioning image. For example, suppose the distance between any two adjacent positions is L. If the distance between two adjacent determined positions is not L, a position can be interpolated between them whose distance to each of the two positions is L, in accordance with the position change rule; that is, the position information of the positioning image is supplemented. A position information sequence can then be obtained from the position information including the supplemented entries. Reference may be made to the related description in step 402 of the embodiment in fig. 4, which is not repeated here.
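Supplementing a single lost frame under a constant-step rule can be sketched as midpoint interpolation. This is a simplified sketch with two stated assumptions: exactly one frame is missing per gap, and the motion between the two known positions is treated as a straight segment (for circular motion the midpoint is only an approximation):

```python
import math

def fill_missing(positions, step, tol=1e-6):
    """Insert a midpoint wherever two consecutive positions are about 2*step
    apart, i.e. exactly one frame was lost between them (an assumption)."""
    filled = [positions[0]]
    for prev, cur in zip(positions, positions[1:]):
        d = math.hypot(cur[0] - prev[0], cur[1] - prev[1])
        if abs(d - 2 * step) < tol:
            # The interpolated point lies between the two neighbours,
            # restoring the expected per-frame step.
            filled.append(((prev[0] + cur[0]) / 2, (prev[1] + cur[1]) / 2))
        filled.append(cur)
    return filled
```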
Step 504, selecting two coded images from the collected coded images, taking one of the two selected coded images as a pre-transformation image and the other as a post-transformation image, and solving an affine transformation matrix.
In this embodiment, the execution subject may select two encoded images from the acquired encoded images, taking one of them as the pre-transform image and the other as the post-transform image, and then solve for the affine transformation matrix.
The manner of selection is not limited in this application. For example, the execution subject may select any two adjacent encoded images in order of acquisition time, or may select the first acquired encoded image (which may be the first encoded image) together with any subsequently acquired encoded image, such as a relatively sharp one.
An affine transformation is geometrically defined as an affine mapping between two vector spaces, usually consisting of a non-singular linear transformation (a transformation performed by a linear function) followed by a translation. That is, one vector space can be transformed into another by performing a linear transformation and then a translation. In the finite-dimensional case, each affine transformation can be given by a matrix A and a vector b, and may be written as the matrix A augmented with an additional column b.
Here, the execution subject may adopt various existing methods to solve for the affine transformation matrix. As an example, the transformation matrix may be estimated by the least squares method: the position information of several identical pixel points in the two encoded images is taken as the positions before and after the transformation, an over-determined system of linear equations is established, and the coordinate transformation coefficients (i.e., the parameters of the affine transformation matrix) are solved by least squares. It is understood that, in order to increase the reuse of data, these pixel points may include the positioning images, which simplifies processing and improves processing efficiency.
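With exactly three point correspondences (e.g. three positioning-image centers before and after the transformation), the least-squares fit reduces to an exact linear solve for the six affine parameters. The sketch below is illustrative, not the patent's implementation; the helper names and the small Gauss-Jordan solver are invented here:

```python
def solve3(M, v):
    """Gauss-Jordan elimination with partial pivoting for a 3x3 system M @ x = v."""
    A = [row[:] + [rhs] for row, rhs in zip(M, v)]  # augmented matrix
    n = 3
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(n):
            if r != col:
                f = A[r][col] / A[col][col]
                A[r] = [a - f * b for a, b in zip(A[r], A[col])]
    return [A[i][n] / A[i][i] for i in range(n)]

def affine_from_pairs(src, dst):
    """Affine parameters [[a, b, tx], [c, d, ty]] mapping three src points
    to three dst points: x' = a*x + b*y + tx, y' = c*x + d*y + ty."""
    M = [[x, y, 1.0] for x, y in src]
    row_x = solve3(M, [p[0] for p in dst])  # coefficients for x'
    row_y = solve3(M, [p[1] for p in dst])  # coefficients for y'
    return [row_x, row_y]
```

With more than three correspondences, a genuine least-squares or RANSAC fit (as the surrounding text describes) would be used instead of the exact solve.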
For another example, the execution subject may also use the RANSAC algorithm to solve for the affine transformation matrix. RANSAC (Random Sample Consensus) is an algorithm that estimates the parameters of a mathematical model from a set of sample data containing outliers, so as to obtain valid sample data. It is often used in computer vision, for example, to jointly solve the point-matching problem of a pair of cameras and the computation of the fundamental matrix in the field of stereoscopic vision. Methods for solving the affine transformation matrix are well established and will not be described in detail herein.
Step 505, performing inverse transformation on the transformed image by using the solved affine transformation matrix to obtain a normalized image.
In this embodiment, the execution subject may apply the inverse of the solved affine transformation matrix to the post-transform image, so that a normalized image can be obtained. That is, of the two selected encoded images, the one taken as the post-transform image is transformed back into the one taken as the pre-transform image.
Step 506, determining the position of the central point of the normalized image according to the position information in the position information sequence.
In this embodiment, the execution subject can determine the center point of the positions according to the position information in the position information sequence. It can be understood that each second encoded image is obtained by rotation about the central region of the first encoded image, so the center point determined here is the center point of the normalized image.
Step 507, identifying the encoded information in the normalized image according to the position information sequence and the position of the center point.
In this embodiment, the execution subject may identify the encoding information in the normalized image from the position information in the position information sequence and the position of the center point. Reference may be made to the related description in step 403 in the embodiment of fig. 4, and details are not repeated here.
Alternatively, the execution subject may perform a polar coordinate transformation on the position information in the sequence, with the center point determined in step 506 as the pole. That is, the origin of coordinates of each piece of position information in the sequence is moved to the center point of the normalized image. The execution subject may then recognize the encoded information in the normalized image from the polar coordinates of each piece of position information.
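The polar transformation about the determined pole can be sketched as computing a radius and an angle per position. The degree convention and the measurement from the positive x-axis are assumptions for illustration:

```python
import math

def to_polar(point, pole):
    """Polar coordinates (radius, angle in degrees) of `point` with `pole`
    as the origin; the angle is measured from the positive x-axis."""
    dx, dy = point[0] - pole[0], point[1] - pole[1]
    return math.hypot(dx, dy), math.degrees(math.atan2(dy, dx))
```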
The identification method of encoded information provided by this embodiment adds a process for determining whether frames were lost, and describes in detail the process of identifying the encoded information from the position information sequence. This enriches and improves the identification process and improves the accuracy of the recognition results.
Referring now to FIG. 6, a block diagram of a computer system 600 suitable for implementing an electronic device (e.g., the terminal 101, 102, 103 or the server 106 shown in FIG. 1) according to embodiments of the present application is shown. The electronic device shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 6, the computer system 600 includes a Central Processing Unit (CPU)601 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM)602 or a program loaded from a storage section 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the system 600 are also stored. The CPU 601, ROM 602, and RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
The following components are connected to the I/O interface 605: an input portion 606 including a touch screen, buttons, a mouse, a microphone, a camera, and the like; an output portion 607 including a display such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card, a modem, or the like. The communication section 609 performs communication processing via a network such as the internet. The driver 610 is also connected to the I/O interface 605 as needed. A removable medium 611 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 610 as necessary, so that a computer program read out therefrom is mounted in the storage section 608 as necessary.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 609, and/or installed from the removable medium 611. The computer program performs the above-described functions defined in the method of the present application when executed by a Central Processing Unit (CPU) 601. It should be noted that the computer readable medium of the present application can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. 
In this application, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software or hardware. The described units may also be provided in a processor, which may be described as: a processor comprising a first generation unit, a second generation unit, and a third generation unit; or, as another example, a processor comprising an acquisition unit, a sequence generation unit, and an identification unit. The names of these units do not in some cases constitute a limitation of the units themselves; for example, the first generation unit may also be described as a "unit that adds a first preset number of positioning images at preset positions in a target image containing encoding information, generating a first encoded image".
As another aspect, the present application also provides a computer-readable medium, which may be contained in the electronic device described in the above embodiments; or may exist separately without being assembled into the electronic device. For example, the computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: adding a first preset number of positioning images at preset positions in a target image containing coding information to generate a first coding image; rotating the first coded image by a second preset number of angles by taking the central area of the first coded image as a center to respectively obtain second coded images under the second preset number of different rotation angles; and generating a coded image set according to the obtained second coded image, and outputting the coded image set.
For another example, the one or more programs, when executed by the electronic device, may further cause the electronic device to: acquiring the coded images displayed by each frame, identifying a first preset number of positioning images in the coded images, and determining the position information of the positioning images, wherein the coded images in the coded image set are displayed one by one according to a preset frame rate; obtaining a position information sequence according to the position information of the positioning image determined by each frame; and positioning and identifying the coding information in the coding images in the coding image set according to the position information sequence.
The above description covers only preferred embodiments of the present application and illustrates the principles of the technology employed. Those skilled in the art will appreciate that the scope of the invention disclosed herein is not limited to the particular combination of features described above; it also encompasses other arrangements formed by any combination of the above features or their equivalents without departing from the spirit of the invention, for example, arrangements in which the above features are replaced with (but not limited to) features of similar function disclosed in the present application.
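The identification method in the claims below normalizes a rotated frame by solving for the affine transformation between two acquired coded images and applying its inverse. A minimal numerical sketch of that step, using hypothetical point correspondences (here a pure 90-degree rotation about the origin) and no image handling:

```python
# Solve for the affine map (x, y) -> (a*x + b*y + tx, c*x + d*y + ty)
# from three point pairs, then invert it to map points of the
# post-transformation image back into the normalized frame.

def solve3(M, v):
    """Solve a 3x3 linear system by Gauss-Jordan elimination with partial pivoting."""
    A = [list(M[i]) + [v[i]] for i in range(3)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]                      # pivot for stability
        A[i] = [x / A[i][i] for x in A[i]]
        for j in range(3):
            if j != i:
                A[j] = [a - A[j][i] * b for a, b in zip(A[j], A[i])]
    return [A[k][3] for k in range(3)]

def solve_affine(src, dst):
    """Affine parameters (a, b, tx, c, d, ty) from 3 point correspondences."""
    M = [[x, y, 1.0] for x, y in src]
    a, b, tx = solve3(M, [p[0] for p in dst])
    c, d, ty = solve3(M, [p[1] for p in dst])
    return a, b, tx, c, d, ty

def apply_inverse(params, point):
    """Map a post-transformation point back to the pre-transformation frame."""
    a, b, tx, c, d, ty = params
    det = a * d - b * c
    u, v = point[0] - tx, point[1] - ty
    return ((d * u - b * v) / det, (-c * u + a * v) / det)

# Made-up correspondences: the second frame is the first rotated 90 degrees
# counter-clockwise about the origin, i.e. (x, y) -> (-y, x).
src = [(1.0, 0.0), (0.0, 1.0), (2.0, 3.0)]
dst = [(0.0, 1.0), (-1.0, 0.0), (-3.0, 2.0)]
params = solve_affine(src, dst)
restored = apply_inverse(params, (-3.0, 2.0))
```

Applying `apply_inverse` to every pixel coordinate of the post-transformation image would yield the normalized image referred to in the claims; production code would typically delegate this to an image library rather than per-point arithmetic.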

Claims (20)

1. A method of generating encoded information, comprising:
adding a first preset number of positioning images at preset positions in a target image containing encoded information, to generate a first coded image;
rotating the first coded image about its central area by a second preset number of angles, to obtain second coded images at the second preset number of different rotation angles; and
generating a coded image set from the obtained second coded images, and outputting the coded image set, wherein the position information of the positioning images of each coded image in the coded image set is used to obtain a position information sequence.
2. The method of claim 1, wherein the first preset number is not greater than two, and the adding a first preset number of positioning images at preset positions comprises:
adding one positioning image at an edge position of the target image; or adding two positioning images, symmetrically or asymmetrically, at edge positions of the target image; or adding one positioning image in the central area of the target image and one at an edge position.
3. The method of claim 1, wherein generating a coded image set from the obtained second coded images comprises:
storing the first coded image and the obtained second coded images, ordered by rotation angle from small to large or from large to small with the first coded image as the reference, to generate the coded image set; or
storing the obtained second coded images, ordered by rotation angle from small to large or from large to small with the first coded image as the reference, to generate the coded image set.
4. The method of any of claims 1-3, wherein the outputting the coded image set comprises:
displaying the coded images in the coded image set one by one at a preset frame rate.
5. The method of claim 4, wherein the displaying the coded images in the coded image set one by one at a preset frame rate comprises:
sequentially displaying the coded images in the coded image set at the preset frame rate, in order of rotation angle from small to large or from large to small.
6. A method of identifying coded information, for identifying the coded images of a coded image set generated by the method of any one of claims 1-5, comprising:
acquiring the coded image displayed in each frame, identifying the first preset number of positioning images in the coded image, and determining position information of the positioning images, wherein the coded images in the coded image set are displayed one by one at a preset frame rate;
obtaining a position information sequence from the position information of the positioning images determined for each frame; and
selecting a coded image from the coded images displayed in the frames, marking, in the selected coded image, the positions indicated by the position information in the position information sequence to form a positioning area, and identifying the information in the positioning area of the selected coded image.
7. The method according to claim 6, wherein obtaining a position information sequence from the position information of the positioning images determined for each frame comprises:
determining whether a frame loss has occurred; and when it is determined that a frame loss has occurred, supplementing the position information of the positioning images in the lost frames according to the position change rule of the positioning images, to obtain the position information sequence.
8. The method according to claim 6, wherein before obtaining the position information sequence from the position information of the positioning images determined for each frame, the method further comprises:
determining whether position information identical to the position information of the positioning images determined for the current frame exists among the previously determined position information; and in response to determining that it exists, stopping acquiring the coded images.
9. The method according to claim 7, wherein before obtaining the position information sequence from the position information of the positioning images determined for each frame, the method further comprises:
determining whether position information identical to the position information of the positioning images determined for the current frame exists among the previously determined position information; and in response to determining that it exists, stopping acquiring the coded images.
10. The method according to claim 6, wherein before obtaining the position information sequence from the position information of the positioning images determined for each frame, the method further comprises:
determining whether the current acquisition duration has reached a preset duration; and in response to determining that it has, stopping acquiring the coded images.
11. The method according to claim 7, wherein before obtaining the position information sequence from the position information of the positioning images determined for each frame, the method further comprises:
determining whether the current acquisition duration has reached a preset duration; and in response to determining that it has, stopping acquiring the coded images.
12. A method of identifying coded information, for identifying the coded images of a coded image set generated by the method of any one of claims 1-5, comprising:
acquiring the coded image displayed in each frame, identifying the first preset number of positioning images in the coded image, and determining position information of the positioning images, wherein the coded images in the coded image set are displayed one by one at a preset frame rate;
obtaining a position information sequence from the position information of the positioning images determined for each frame;
selecting two coded images from the acquired coded images, taking one of the two selected coded images as a pre-transformation image and the other as a post-transformation image, and solving for an affine transformation matrix;
inversely transforming the post-transformation image using the solved affine transformation matrix, to obtain a normalized image;
determining the position of the central point of the normalized image according to the position information in the position information sequence; and
identifying the coded information in a target area in the normalized image, wherein the target area is determined using a positioning area and the position of the central point, the positioning area being formed based on the positions indicated by the position information.
13. The method of claim 12, wherein the identifying the coded information in the target area in the normalized image comprises:
performing a polar coordinate transformation on the position information in the position information sequence, with the central point as the pole; and
identifying the coded information in the normalized image according to the polar coordinates in the position information sequence.
14. The method according to claim 12 or 13, wherein obtaining a position information sequence from the position information of the positioning images determined for each frame comprises:
determining whether a frame loss has occurred; and when it is determined that a frame loss has occurred, supplementing the position information of the positioning images in the lost frames according to the position change rule of the positioning images, to obtain the position information sequence.
15. The method according to claim 12 or 13, wherein before obtaining the position information sequence from the position information of the positioning images determined for each frame, the method further comprises:
determining whether position information identical to the position information of the positioning images determined for the current frame exists among the previously determined position information; and in response to determining that it exists, stopping acquiring the coded images.
16. The method according to claim 14, wherein before obtaining the position information sequence from the position information of the positioning images determined for each frame, the method further comprises:
determining whether position information identical to the position information of the positioning images determined for the current frame exists among the previously determined position information; and in response to determining that it exists, stopping acquiring the coded images.
17. The method according to claim 12 or 13, wherein before obtaining the position information sequence from the position information of the positioning images determined for each frame, the method further comprises:
determining whether the current acquisition duration has reached a preset duration; and in response to determining that it has, stopping acquiring the coded images.
18. The method according to claim 14, wherein before obtaining the position information sequence from the position information of the positioning images determined for each frame, the method further comprises:
determining whether the current acquisition duration has reached a preset duration; and in response to determining that it has, stopping acquiring the coded images.
19. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-18.
20. A computer-readable medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the method according to any one of claims 1-18.
CN201810920526.XA 2018-08-14 2018-08-14 Method for generating and identifying coded information Active CN108985421B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201810920526.XA CN108985421B (en) 2018-08-14 2018-08-14 Method for generating and identifying coded information
PCT/CN2019/100521 WO2020034981A1 (en) 2018-08-14 2019-08-14 Method for generating encoded information and method for recognizing encoded information

Publications (2)

Publication Number Publication Date
CN108985421A (en) 2018-12-11
CN108985421B (en) 2021-05-07

Family

ID=64552969

Country Status (2)

Country Link
CN (1) CN108985421B (en)
WO (1) WO2020034981A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108985421B (en) * 2018-08-14 2021-05-07 上海掌门科技有限公司 Method for generating and identifying coded information
CN112533014B (en) * 2020-11-26 2023-06-09 Oppo广东移动通信有限公司 Method, device and equipment for processing and displaying target object information in live video broadcast
CN114169353B (en) * 2021-12-08 2023-07-04 福建正孚软件有限公司 Microcode decryption method and microcode decryption system
CN114202047B (en) * 2021-12-14 2023-07-07 福建正孚软件有限公司 Microscopic code-based traceability application method and system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20090128751A (en) * 2008-06-11 2009-12-16 박문수 Rotary auto code
CN106529635A (en) * 2016-10-18 2017-03-22 网易(杭州)网络有限公司 Coding pattern generating and identifying method and apparatus
CN106570549A (en) * 2016-10-28 2017-04-19 网易(杭州)网络有限公司 Coding pattern generation and identification methods and coding pattern generation and identification devices
CN110390375A * 2018-04-17 2019-10-29 银河联动信息技术(北京)有限公司 Synthesis system and method for a dynamic two-dimensional code and mark, and dynamic two-dimensional code display device

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
SE523700C2 (en) * 2001-06-26 2004-05-11 Anoto Ab Method, device and memory medium for position determination
KR100746641B1 (en) * 2005-11-11 2007-08-06 주식회사 칼라짚미디어 Image code based on moving picture, apparatus for generating/decoding image code based on moving picture and method therefor
US7978363B2 (en) * 2006-02-15 2011-07-12 Seiko Epson Corporation Printing apparatus and printing method
WO2013027234A1 (en) * 2011-08-22 2013-02-28 Zak株式会社 Satellite dot type two-dimensional code and method for reading same
US9424504B2 (en) * 2014-09-15 2016-08-23 Paypal, Inc. Combining a QR code and an image
CN106570546B (en) * 2016-10-18 2019-05-07 网易(杭州)网络有限公司 A kind of generation of coding pattern, recognition methods and device
CN106548499B (en) * 2016-10-27 2020-06-16 网易(杭州)网络有限公司 Method and device for generating and identifying coding pattern
CN108985421B (en) * 2018-08-14 2021-05-07 上海掌门科技有限公司 Method for generating and identifying coded information

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Location dependent and rotated group space-time block codes; C.M. Yetis; IEEE; 2004-10-25; entire document *
Java-based two-dimensional code recognition system; Guo Yi; Electronic World (《电子世界》); 2018-08-08; vol. 2018, no. 15, pp. 164-166 *

Also Published As

Publication number Publication date
WO2020034981A1 (en) 2020-02-20
CN108985421A (en) 2018-12-11


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant