CN111753578A - Identification method of optical communication device and corresponding electronic equipment


Info

Publication number
CN111753578A
CN111753578A
Authority
CN
China
Prior art keywords
information
identification
optical
optical communication
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910237504.8A
Other languages
Chinese (zh)
Inventor
方俊
方晓晨
牛旭恒
李江亮
Current Assignee
Beijing Whyhow Information Technology Co Ltd
Original Assignee
Beijing Whyhow Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Whyhow Information Technology Co Ltd filed Critical Beijing Whyhow Information Technology Co Ltd
Priority application: CN201910237504.8A
Publication: CN111753578A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/50 - Context or environment of the image
    • G06V 20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/24 - Classification techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Studio Devices (AREA)

Abstract

Provided are an identification method for an optical communication device and a corresponding electronic apparatus. The method includes: receiving an environment image to be identified that contains the optical communication device and was captured by an identification device; inputting the environment image into a classification model to obtain a classification output, wherein the classification model is trained on pre-stored training data that includes identification information of a plurality of optical communication devices and reference environment images, and wherein the classification output is identification information of an optical communication device; and transmitting information about the classification output to the identification device.

Description

Identification method of optical communication device and corresponding electronic equipment
Technical Field
The present invention relates to the field of optical information technologies, and in particular, to an identification method for an optical communication device and a corresponding electronic device.
Background
The statements in this section merely provide background information related to the present disclosure and may not constitute prior art.
Optical communication devices are also referred to as optical labels, and the two terms are used interchangeably herein. An optical label transmits information by emitting different light. It offers a long identification distance, loose requirements on visible-light conditions, and strong directivity, and the information it transmits can change over time, providing large information capacity and flexible configuration. Compared with a traditional two-dimensional code, an optical label offers a longer identification distance and stronger information-interaction capability, providing great convenience to users and merchants.
An optical label may typically include a controller and at least one light source; the controller may drive the light source in different driving modes to convey different information to the outside. To provide corresponding services to users and merchants based on optical labels, each optical label may be assigned identification information (ID) by its manufacturer, manager, or user to uniquely identify it. In general, the controller in the optical label may drive the light source to transmit the identification information outward, and a user may use an optical label identification device to continuously capture images of the optical label to obtain the identification information it transmits. A corresponding service may then be accessed based on that identification information, for example a web page associated with it, or other associated information (e.g., the location information of the optical label corresponding to the identification information).
Fig. 1 shows an exemplary optical label 100 comprising three light sources (a first light source 101, a second light source 102, and a third light source 103). The optical label 100 further comprises a controller (not shown in fig. 1) for selecting a respective driving mode for each light source according to the information to be conveyed. For example, in different driving modes, the controller may control the turning on and off of a light source using driving signals of different frequencies, so that when the optical label 100 is photographed in a low exposure mode using a rolling-shutter imaging device (e.g., a CMOS imaging device), the image of each light source may exhibit different stripes. Fig. 2 shows an image of the optical label 100 taken by a rolling-shutter imaging device in a low exposure mode while the optical label 100 is conveying information: the image of the first light source 101 exhibits relatively narrow stripes, while the images of the second light source 102 and the third light source 103 exhibit relatively wide stripes. By analyzing the imaging of the light sources in the optical label 100, the driving mode of each light source at that moment, and thus the information conveyed by the optical label 100 at that moment, can be determined.
The optical label identification device may be, for example, a mobile device carried by a user (e.g., a cell phone with a camera, a tablet computer, smart glasses, or a smart watch), or a machine capable of autonomous movement (e.g., a drone, a driverless car, or a robot). In many cases, the identification device needs to acquire multiple images of the optical label by continuously capturing it with an on-board camera in a specific shooting mode (such as the low exposure mode mentioned above), and to analyze those images with a built-in application to identify the information conveyed by the optical label. Because such devices vary widely in hardware and software configuration, some devices may be unable to recognize optical labels at all due to their own hardware or software limitations (e.g., limitations on resolution, zoom capability, exposure mode, or frame rate). In addition, even a device capable of recognizing optical labels may temporarily fail to recognize them under adverse environmental conditions (e.g., too great a distance or too-strong ambient light).
In order to solve the above problems, the present invention provides an identification method of an optical communication apparatus.
Disclosure of Invention
One aspect of the present invention provides an identification method for an optical communication device, including: receiving an environment image to be identified that contains the optical communication device and was captured by an identification device; inputting the environment image into a classification model to obtain a classification output, wherein the classification model is trained on pre-stored training data that includes identification information of a plurality of optical communication devices and reference environment images, and wherein the classification output is identification information of an optical communication device; and transmitting information about the classification output to the identification device.
Optionally, wherein one or more classification models are obtained using the training data.
Optionally, wherein each class in each classification model corresponds to one optical communication device.
Optionally, the training data further includes location information of the plurality of optical communication devices.
Optionally, wherein the training data is divided into a plurality of sets based on the location information, the optical communication devices in each set having proximate geographic locations, and a classification model is trained for each set, the classification model having associated location information.
Optionally, the identification method further includes: receiving information relating to the location of the optical communication device from the identification apparatus; and selecting a classification model to be input by the environmental image to be recognized from a plurality of classification models based on the information relating to the position of the optical communication apparatus and the associated position information of each of the plurality of classification models.
Optionally, the training data further includes shooting time information of the reference environment image.
Optionally, wherein the training data is divided into a plurality of sets based on the capturing time information of the reference environment image, and a classification model is trained for each set, the classification model having associated capturing time information.
Optionally, the identification method further includes: selecting a classification model to be input by the environmental image to be recognized from a plurality of classification models based on information about a photographing time of the environmental image to be recognized and associated photographing time information of each of the plurality of classification models.
Optionally, wherein the pre-stored training data is obtained by: receiving identification information of an optical communication device and an environment image containing the optical communication device from an identification apparatus having optical communication device identification capability; and storing the environment image as a reference environment image in association with the identification information.
Optionally, the identification method further includes: receiving information about the position of the optical communication device and/or information about the capturing time of the reference environment image from a recognition apparatus having an optical communication device recognition capability.
Another aspect of the invention relates to a computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, is operative to carry out the method as described above.
Another aspect of the invention relates to an electronic device comprising a processor and a memory, in which a computer program is stored which, when being executed by the processor, is operative to carry out the method as described above.
The invention provides a method for identifying an optical communication device, which does not need to identify information transmitted by a light source in the optical communication device, but can identify the optical communication device through an environmental image around the optical communication device.
Drawings
Embodiments of the invention are further described below with reference to the accompanying drawings, in which:
FIG. 1 illustrates an exemplary optical label;
FIG. 2 shows an image of an optical label taken by a rolling-shutter imaging device in a low exposure mode;
FIG. 3 illustrates an exemplary optical label network;
FIG. 4 illustrates a recognition method according to one embodiment;
FIG. 5 illustrates an identification method according to another embodiment;
FIG. 6 illustrates an identification method according to yet another embodiment;
FIG. 7 illustrates operations performed when an optical label is identified by an identification device with optical label identification capabilities, in accordance with one embodiment;
FIG. 8 shows 6 optical tag IDs stored in a database and 6 reference environment images containing the 6 optical tags, respectively; and
FIG. 9 shows 6 environment images taken by an identification device without optical label identification capability, and the results obtained after classifying these environment images using a classification model.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be further described in detail by embodiments with reference to the accompanying drawings. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Each optical label can be used to transmit information to the outside, and in practice a large number of optical labels can be built into an optical label network. FIG. 3 illustrates an exemplary optical label network that includes a plurality of optical labels and at least one server, where information associated with each optical label may be stored on the server. For example, the server may maintain the identification information (ID), location information, or any other information of each optical label, such as service information associated with the optical label, or description information or attributes such as its physical size, physical shape, and orientation. The identification device may use the identification information of an identified optical label to obtain further information related to that optical label by querying the server. The position information of an optical label may refer to its actual position in the physical world, which may be indicated by geographic coordinate information. The server may be a software program running on a computing device, or a cluster of computing devices. The optical label may be offline, i.e., it does not need to communicate with the server. Of course, an online optical label capable of communicating with the server is also possible.
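As an illustration of this per-label store and lookup, a minimal in-memory sketch follows; the field names, URL, and values are hypothetical, not taken from the patent:

```python
# Hypothetical stand-in for the server-side store of per-optical-label
# information described above (ID, location, service info, physical attributes).
optical_labels = {
    "ID1": {
        "location": (39.9042, 116.4074),          # illustrative coordinates
        "service_url": "https://example.com/ID1",  # illustrative URL
        "physical_size_m": 0.3,
    },
}

def query_label(tag_id):
    """Look up the information associated with an identified optical label ID,
    returning None when the ID is unknown."""
    return optical_labels.get(tag_id)

info = query_label("ID1")
```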
An optical label may be scanned by an identification device (e.g., a cell phone) to identify the information conveyed by its light source. However, as described in the background section, identification devices in the real world are very diverse, and it has been found in practice that some identification devices cannot support such identification due to limitations of their hardware or software, or may temporarily fail to identify the information conveyed by the optical label due to adverse environmental conditions, which is highly disadvantageous. In this document, an identification device capable of identifying the information transmitted by the light source in an optical label is referred to as an "identification device having optical label identification capability"; one that cannot is referred to as an "identification device without optical label identification capability".
One embodiment of the present invention provides a method for identifying an optical label based on an image of an environment around the optical label, which does not need to analyze information transmitted by a light source in the optical label through different light emitting modes, but uses a classification model based on the image of the environment of the optical label to realize the identification of the optical label.
The server may store in advance identification information of a plurality of optical labels and a reference environment image containing each of the optical labels in a database. A table structure in a database according to one embodiment may include an optical tag ID field and a reference environment image field, each optical tag ID may correspond to one or more reference environment images. An example table structure is as follows:
Optical tag ID1: reference environment image list of optical tag ID1
Optical tag ID2: reference environment image list of optical tag ID2
Optical tag ID3: reference environment image list of optical tag ID3
……
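The two-column structure above can be sketched as a simple in-memory mapping; this is a hypothetical stand-in for the database table, and all IDs and file names are illustrative:

```python
# Hypothetical stand-in for the database table sketched above: each optical
# tag ID maps to its list of reference environment images.
reference_images = {
    "tag_id_1": ["tag1_day.jpg", "tag1_night.jpg"],
    "tag_id_2": ["tag2_entrance.jpg"],
    "tag_id_3": ["tag3_street.jpg"],
}

def add_reference_image(table, tag_id, image_path):
    """Store an environment image in association with a tag ID, as described
    for accumulating the pre-stored training data."""
    table.setdefault(tag_id, []).append(image_path)

add_reference_image(reference_images, "tag_id_2", "tag2_closeup.jpg")
```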
The server may obtain the classification model by training on the optical tag IDs and reference environment images stored in the database using a machine learning method, where the training input is the one or more reference environment images corresponding to each optical label and the training output is the corresponding optical tag ID. Various existing machine learning methods may be used to train the classification model; for example, one model that may be employed is a deep convolutional neural network (CNN). In one embodiment, each class in the classification model may correspond to one optical label, i.e., each class corresponds to one optical tag ID.
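As a toy stand-in for such a classifier (the patent suggests a deep CNN; the nearest-centroid scheme, feature vectors, and IDs below are purely illustrative), the mapping from reference-image features to optical tag IDs can be sketched as:

```python
# Toy stand-in for the classification model: a nearest-centroid classifier
# over hand-made feature vectors. This only illustrates the
# (reference image features -> optical tag ID) training mapping.
def train(samples):
    """samples: {tag_id: [feature_vector, ...]} -> {tag_id: centroid}."""
    centroids = {}
    for tag_id, vectors in samples.items():
        n = len(vectors)
        centroids[tag_id] = [sum(v[i] for v in vectors) / n
                             for i in range(len(vectors[0]))]
    return centroids

def classify(centroids, vector):
    """Return (tag_id, score) for the nearest centroid; the score plays the
    role of the 'correctness probability' attached to each output."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    tag_id = min(centroids, key=lambda t: dist(centroids[t], vector))
    return tag_id, 1.0 / (1.0 + dist(centroids[tag_id], vector))

model = train({"ID1": [[0.0, 0.1], [0.1, 0.0]],
               "ID2": [[1.0, 0.9], [0.9, 1.0]]})
label, score = classify(model, [0.95, 0.95])
```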
In one embodiment, when an identification device without optical tag identification capability attempts to identify an optical tag, or when an identification device with optical tag identification capability temporarily fails to identify an optical tag in a conventional manner due to adverse environmental conditions, the method shown in fig. 4 may be performed for identification as follows:
step 401: the identification device takes an image of the environment to be identified containing the optical label.
The environment image to be identified contains imaging information of the environment around the optical label. It may be an image captured in a normal shooting mode, or another kind of image, such as a grayscale or single-channel image, as long as it can represent the imaging information of the environment around the optical label. In one embodiment, the environment image containing the optical label is captured after the identification device detects the presence of the optical label, or after a user of the identification device confirms its presence. The identification device may detect the presence of the optical label based on characteristics of the optical label (e.g., specific structural information, geometric characteristic information, or lighting pattern information).
Step 402: the identification device sends the environmental image to be identified to the server.
Step 403: the server inputs the environmental image to be recognized into a classification model to obtain a classification output result, wherein the classification model is obtained by training through the pre-stored identification information of the optical label and a reference environmental image containing the optical label.
The classification output of the classification model may be, for example, an optical tag ID that is used to identify the optical tag. The classification output of the classification model may be one or more optical tag IDs, and each classification output may have an associated probability of correctness provided by the classification model.
Step 404: the server sends information about the classification output result to the identification device. For example, the server may send information about one or more optical labels indicated by the classification output of the classification model (e.g., correctness probability information associated with each optical label, identification information of the optical label, owner information of the optical label, network service address information associated with the optical label, etc.) to the identification device. Alternatively, the server may select one or more optical labels from the one or more optical labels indicated by the classification output result of the classification model, and transmit information about the selected optical labels to the identification device. In one embodiment, the optical label with the highest probability of correctness may be selected from the one or more optical labels indicated by the classification output result. In one embodiment, an optical label with a probability of correctness that satisfies a predetermined condition (e.g., the probability of correctness is greater than a certain threshold) may be selected from the one or more optical labels indicated by the classification output result. In one embodiment, the plurality of optical labels with the highest probability of correctness may be selected from the one or more optical labels indicated by the classification output result. The user of the identification device may select from the list of optical labels provided by the server the optical label that he or she believes to be appropriate, depending on other information (e.g. the name of the shop to which the optical label belongs, etc.). 
For example, if the server sends the relevant information of the determined three optical labels to the identification device, the relevant information indicates that the owner of the first optical label is store a, the owner of the second optical label is store B, and the owner of the third optical label is store C. At this point, if the user of the identification device knows that the optical signature he or she currently wants to identify is that of store B, he or she may select the second optical signature to interact with.
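The selection strategies enumerated above (highest correctness probability, probability above a threshold, top-k most probable) can be sketched as follows; the function name and store data are hypothetical:

```python
def select_labels(outputs, threshold=None, top_k=None):
    """outputs: list of (tag_id, probability) from the classification model.
    Ranks by correctness probability, optionally keeps only entries above a
    threshold, and optionally truncates to the top-k most probable labels."""
    ranked = sorted(outputs, key=lambda o: o[1], reverse=True)
    if threshold is not None:
        ranked = [o for o in ranked if o[1] > threshold]
    if top_k is not None:
        ranked = ranked[:top_k]
    return ranked

outputs = [("store_A", 0.15), ("store_B", 0.70), ("store_C", 0.10)]
best = select_labels(outputs, top_k=1)           # single most probable label
likely = select_labels(outputs, threshold=0.12)  # labels above the threshold
```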
There are many different strategies for training a classification model. In one embodiment, a single classification model may be derived for all the optical labels stored in the database, where each class may correspond to one optical label. In another embodiment, a plurality of classification models may be derived for all the optical labels stored in the database, where each class in each classification model may correspond to one optical label. For example, as mentioned above, the location information of each optical label may be stored on the server in advance. The position information of an optical label may be any information capable of indicating its position, such as its GPS information, city, building, street, altitude, or floor. The server may use the stored location information to divide the optical labels into sets (e.g., the optical labels within a mall, or the optical labels on a street), the optical labels in each set being geographically close. One classification model may be trained for each set of geographically close optical labels, so that multiple classification models are derived for all the optical labels. Each classification model may have associated location information (e.g., GPS information), which may be, for example, the average of the locations of all the optical labels covered by the model, or the location of the center of the area determined by their geographic positions.
By dividing the optical labels into a plurality of sets based on the position information of the optical labels and generating a classification model for each set, the complexity of the generated classification model can be reduced, which is helpful for improving the classification efficiency and accuracy of the classification model, and is particularly suitable for large application scenarios with a large number of optical labels.
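The grouping of optical labels into geographically close sets, and the associated location of each resulting model, can be sketched as follows; the grid-snapping scheme and all coordinates are hypothetical (any clustering by GPS, mall, or street would serve):

```python
def group_by_location(tags, cell=0.01):
    """Divide optical tags into geographically close sets by snapping their
    (lat, lon) to a coarse grid cell. Returns a dict: cell -> list of IDs."""
    groups = {}
    for tag_id, (lat, lon) in tags.items():
        key = (round(lat / cell), round(lon / cell))
        groups.setdefault(key, []).append(tag_id)
    return groups

def associated_location(tags, tag_ids):
    """Associated location of a classification model: the average of the
    locations of the optical tags it covers, as described above."""
    lats = [tags[t][0] for t in tag_ids]
    lons = [tags[t][1] for t in tag_ids]
    return (sum(lats) / len(lats), sum(lons) / len(lons))

tags = {"ID1": (39.9042, 116.4074),   # hypothetical coordinates
        "ID2": (39.9044, 116.4076),   # close to ID1: same set
        "ID3": (31.2304, 121.4737)}   # another city: separate set
groups = group_by_location(tags)      # two sets -> two classification models
loc = associated_location(tags, ["ID1", "ID2"])
```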
In one embodiment, where the server trains a classification model for a set of geographically close light labels, and each classification model has associated location information, the method shown in fig. 5 may be performed for identification as follows:
step 501: the identification device takes an image of the environment to be identified containing the optical label.
Step 502: the identification device sends information about the position of the optical label and the image of the environment to be identified to a server.
The identification device may additionally acquire auxiliary information when capturing the environment image to be identified, the auxiliary information including position information of the identification device. The position information may be any information usable to determine the identification device's location, for example its GPS information, altitude information, Wi-Fi access point information, base station information, or Bluetooth connection information. Since the optical label being identified is located in the vicinity of the identification device, the position information of the identification device may reflect, or may be used to determine, the approximate position of the optical label. In one embodiment, the auxiliary information may also include other information, such as orientation or attitude information of the identification device, which may be obtained by sensing the directions of the magnetic and gravity fields and which helps determine the position of the optical label more accurately. The information related to the position of the optical label may be the position information of the identification device itself, or information derived from it. To obtain more accurate position information for the optical label, various relative positioning methods known in the art may be used to obtain the position of the optical label relative to the identification device, and the optical label's position information may then be obtained from this relative position together with the identification device's position information.
For example, the relative direction from the identification device to the optical label may be determined from the orientation or attitude information of the identification device together with the imaging position of the optical label in the image. The relative distance may be determined from the imaging size of the optical label in the image, optionally together with other information (e.g., the focal length of the identification device's camera): the larger the imaging, the closer the distance, and the smaller the imaging, the farther the distance (optical labels may generally have a default uniform size). Alternatively, the relative distance may be determined using a depth camera or a binocular camera mounted on the identification device. The position information of the optical label can then be obtained from the position information of the identification device together with this relative distance and relative direction.
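A minimal sketch of this last step, assuming the relative distance and bearing have already been estimated; it uses a small-distance flat-earth approximation, whereas a real implementation would use a geodesic library:

```python
import math

def label_position(device_lat, device_lon, distance_m, bearing_deg):
    """Estimate the optical label's position from the identification device's
    position plus the relative distance (metres) and direction (degrees
    clockwise from north), via a flat-earth approximation."""
    earth_r = 6_371_000.0  # mean Earth radius in metres
    d_north = distance_m * math.cos(math.radians(bearing_deg))
    d_east = distance_m * math.sin(math.radians(bearing_deg))
    dlat = math.degrees(d_north / earth_r)
    dlon = math.degrees(d_east / (earth_r * math.cos(math.radians(device_lat))))
    return device_lat + dlat, device_lon + dlon

# Device at hypothetical coordinates; optical label 20 m due north of it.
lat, lon = label_position(39.9042, 116.4074, 20.0, 0.0)
```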
Step 503: the server selects a classification model based on the information related to the location of the optical label and the associated location information for each of the plurality of classification models.
The server may select one or more suitable classification models based on the information related to the location of the optical label and the associated location information for each of the plurality of classification models. For example, the server may determine the distance between the associated location of each classification model and the location of the optical label and select the classification model with the smallest distance, or select one or more classification models with distances less than a certain threshold.
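The distance-based model selection just described can be sketched as follows; the model names and associated locations are hypothetical:

```python
def select_models(models, label_pos, threshold=None):
    """models: {model_name: (lat, lon)} associated locations. Returns the
    model whose associated location is nearest to the label's position, or,
    when a threshold is given, all models within that distance (in the same
    coordinate units)."""
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    if threshold is not None:
        return [m for m, p in models.items() if dist(p, label_pos) < threshold]
    return [min(models, key=lambda m: dist(models[m], label_pos))]

models = {"mall_model": (39.9043, 116.4075),     # hypothetical models
          "street_model": (31.2304, 121.4737)}
chosen = select_models(models, (39.9042, 116.4074))
```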
Step 504: and the server inputs the environment image to be recognized into the selected classification model to obtain a classification output result. Specifically, the server may input the environmental image to be recognized into each selected classification model to obtain a classification output result, i.e., an optical tag ID, of each selected classification model. The classification output of each classification model may be one or more optical tag IDs, and each classification output may have an associated probability of correctness provided by the classification model.
Step 505: the server sends information about the classification output result to the identification device.
The method shown in fig. 5 additionally considers the position information of the optical label, using it to screen out an appropriate classification model in advance, which helps identify the optical label in the environment image efficiently and accurately. For example, if the position information indicates that the optical label is located in a certain mall, and all the optical labels in that mall are covered by one classification model, the server may classify the environment image to be recognized using only that classification model, without considering the others, improving recognition efficiency and accuracy.
In one embodiment, considering that environment images captured at different times may differ due to lighting conditions and the like (for example, images captured at the same location during the day and at night may differ greatly), the server may further store the capture time information of each reference environment image. The capture time information may be received by the server together with the reference environment image, or the server may estimate the capture time from the time at which it received the reference environment image, or may simply treat the reception time as the capture time. In this way, when building classification models, the server can build different classification models for reference environment images captured in different time periods. For example, the server may divide the reference environment images into a "daytime" group and a "nighttime" group based on their capture times, obtain one classification model using the identification information of the optical labels and the daytime reference environment images, and obtain another classification model using the identification information and the nighttime reference environment images.
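The day/night grouping of reference images by capture time can be sketched as follows; the 6:00-18:00 boundary and the file names are hypothetical choices:

```python
from datetime import datetime

def split_by_period(images, day_start=6, day_end=18):
    """Divide reference environment images into 'daytime' and 'nighttime'
    groups by capture hour; one classification model would then be trained
    per group, as described above."""
    groups = {"daytime": [], "nighttime": []}
    for image, taken_at in images:
        period = ("daytime" if day_start <= taken_at.hour < day_end
                  else "nighttime")
        groups[period].append(image)
    return groups

images = [("ID1_a.jpg", datetime(2019, 3, 27, 10, 0)),   # morning capture
          ("ID1_b.jpg", datetime(2019, 3, 27, 21, 30))]  # evening capture
groups = split_by_period(images)
```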
In one embodiment, where the server has created different classification models for reference environment images taken during different time periods, each classification model having associated shooting time information, identification may be performed using the method shown in fig. 6:
Step 601: the identification device takes an image of the environment to be identified containing the optical label.
Step 602: the identification device sends the environmental image to be identified to the server.
In this step, the recognition device may also transmit information about the shooting time of the environment image to be recognized to the server.
Step 603: the server selects a classification model based on the time information.
The server may select an appropriate classification model from the plurality of classification models based on the current time, based on the time at which it received the environment image to be recognized, or based on the shooting time of the environment image to be recognized that it received from the recognition device. For example, if the current time is daytime, the server may select the classification model created for the daytime reference environment images.
Step 604: and the server inputs the environment image to be recognized into the selected classification model to obtain a classification output result. Specifically, the server may input the environmental image to be recognized into the selected classification model to obtain a classification output result of the selected classification model, that is, the optical tag ID. The classification output of the classification model may be one or more optical tag IDs, and each classification output may have an associated probability of correctness provided by the classification model.
Step 605: the server sends information about the classification output result to the identification device.
The method shown in fig. 6 further considers the shooting time of the environment image, which largely avoids interference caused by differing ambient light conditions and helps to identify the optical label in the environment image to be identified efficiently and accurately.
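Steps 602-603 above — using the reported shooting time when available, and falling back to the current time otherwise — could look like the following sketch. The dictionary keyed by period name and the 6:00/18:00 boundaries are assumptions, not details from the patent.

```python
from datetime import datetime

def select_model_by_time(models_by_period, shot_at=None, day_start=6, night_start=18):
    """Pick the classification model whose associated time period matches the
    shooting time of the environment image to be recognized.

    models_by_period: {"day": model, "night": model} -- assumed layout.
    shot_at: optional datetime reported by the recognition device; if None,
             the server falls back to its current time.
    """
    t = shot_at or datetime.now()
    period = "day" if day_start <= t.hour < night_start else "night"
    return models_by_period[period]
```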
It will be appreciated by those skilled in the art that, in one embodiment, the methods described in connection with fig. 5 and fig. 6 may be combined, i.e., both the time information and the position information of the optical label may be considered when selecting the classification model.
The reference environment image may be obtained in various feasible ways. For example, one or more reference environment images containing the optical label may be taken by the installer and uploaded to the server when the optical label is installed. In one embodiment, a reference environmental image of an optical label may be provided by an identification device having optical label identification capabilities upon identification of the optical label. Fig. 7 illustrates operations performed when an optical label is identified by an identification device having optical label identification capabilities, according to one embodiment.
Step 701: the identification device identifies its identification information conveyed by the optical label. For example, the identification device may identify its identification information (ID information) conveyed by the optical label by capturing and analyzing an image of the optical label.
Step 702: the identification device selects or otherwise takes an image of the environment containing the optical label. The environment image contains imaging information of the environment around the optical label. The environment image may be an image captured in the normal capturing mode, or may be another image such as a grayscale image or a single-channel image, as long as it can represent imaging information of the environment around the optical label. In one embodiment, if an image taken by the recognition device in recognizing the identification information of the optical tag can be taken as the above-mentioned environment image, the recognition device can select the environment image therefrom. In another embodiment, the identification device may capture the environmental image before, after, or during the identification of the identification information of the optical tag. In general, the process of the identification device recognizing the identification information of the optical label and the process of capturing the environment image are performed continuously or concurrently, so that the identification device recognizes the optical label and captures the environment image at substantially the same position. It will be appreciated that the identification device does not have to be in a position to identify the optical label when the image of the environment is taken.
Step 703: the identification device sends identification information of the optical label and an environment image containing the optical label to a server.
The server, upon receiving the identification information of the optical label and the environment image containing the optical label from the identification device, may store the environment image in a database as a reference environment image, in association with the identification information. These reference environment images may come from the same or different identification devices. In one embodiment, the identification device may upload the shooting time information of the environment image along with the image itself.
It should be noted that an identification device with optical label identification capability need not capture and transmit an environment image containing the optical label every time it identifies the optical label; for example, an application program on the identification device may instruct certain identification devices to capture and transmit environment images at certain times. Furthermore, the server does not have to store all the received environment images, but may filter them before storage. For example, the server may keep only environment images of high imaging quality; alternatively, before storing a new environment image, the server may compare it with the stored reference environment images of the associated optical label, and skip storing it if it is similar to any previously stored reference environment image. In one embodiment, the server may also delete some reference environment images when appropriate, e.g., possibly outdated reference environment images taken long ago.
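The similarity screening described above might be sketched as follows, modelling images as flat grayscale pixel lists and using a normalized mean absolute difference as a stand-in for whatever similarity measure the server actually uses; both the representation and the 0.9 threshold are assumptions for illustration.

```python
def should_store(candidate, stored_images, threshold=0.9):
    """Decide whether a newly received environment image should be kept as a
    reference image, skipping it when it is too similar to one already stored.

    Images are flat lists of 8-bit grayscale pixels of equal length (assumed).
    Similarity is 1 minus the normalized mean absolute pixel difference.
    """
    def similarity(a, b):
        mad = sum(abs(x - y) for x, y in zip(a, b)) / (len(a) * 255.0)
        return 1.0 - mad

    # Store only if the candidate is not near-duplicate of any stored image.
    return all(similarity(candidate, img) < threshold for img in stored_images)
```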
The position information of the optical label can be obtained in various feasible ways. For example, the installer may upload the location information of the optical label to the server when the optical label is installed. In one embodiment, the location information of the optical label may be provided by an identification device with optical label identification capability after it identifies the optical label. In particular, such an identification device may additionally obtain auxiliary information when identifying an optical label, the auxiliary information comprising position information of the identification device. The position information may be any information that can be used to determine the location of the identification device, for example its GPS information, altitude information, WiFi access point information, base station information, or Bluetooth connection information. Since the optical label identified by the identification device is located in its vicinity, the position information of the identification device reflects, or can be used to determine, the approximate position of the optical label. In one embodiment, the auxiliary information may also include other information, such as the orientation or attitude of the identification device, which may be obtained by means of magnetic field and gravity sensing, for example, and which helps to determine the position of the optical label more accurately.
An identification device with optical label identification capability may send information about the position of the optical label to a server. This information may be the position information of the identification device itself, or information derived from it. To obtain more accurate position information for the optical label, various relative positioning methods known in the art may be used to determine the relative position of the optical label with respect to the identification device, and the position of the optical label may then be computed from this relative position together with the position of the identification device. For example, the relative direction of the optical label may be determined from the orientation or attitude information of the identification device and the imaging position of the optical label in the image. The relative distance of the optical label may be determined from its imaging size in the image, optionally combined with other information such as the focal length of the identification device's camera: the larger the imaging, the closer the distance; the smaller the imaging, the farther the distance. The actual physical size of the optical label may be obtained from the server, or the optical labels may have a default uniform size. Alternatively, the relative distance may be measured by a depth camera, a binocular camera, or the like mounted on the identification device. Given the relative distance and relative direction, the position of the optical label can be obtained from the position of the identification device.
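The imaging-size-based distance estimate mentioned above follows the standard pinhole camera model: with the focal length expressed in pixels, distance ≈ physical size × focal length / imaging size, so a larger imaging size yields a smaller distance. A minimal sketch (parameter names are illustrative):

```python
def estimate_distance(physical_size_m, imaging_size_px, focal_length_px):
    """Pinhole-camera estimate of the label-to-camera distance in metres.

    physical_size_m: real size of the optical label (from the server, or a
                     default uniform size).
    imaging_size_px: size of the label's image on the sensor, in pixels.
    focal_length_px: camera focal length expressed in pixels.
    """
    return physical_size_m * focal_length_px / imaging_size_px
```

For instance, a 0.2 m label imaged at 100 px with a 1000 px focal length gives an estimated distance of 2 m; halving the imaging size doubles the estimate.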
When transmitting the identification information of the optical label to the server, an identification device with optical label identification capability may send the information about the position of the optical label together with the identification information and the environment image containing the optical label.
The server, upon receiving the identification information of the optical label, the information related to the position of the optical label, and the image of the environment containing the optical label, may store it in a database. An example table structure is as follows:
Optical label ID1 | Location information of optical label ID1 | Environment image list of optical label ID1
Optical label ID2 | Location information of optical label ID2 | Environment image list of optical label ID2
Optical label ID3 | Location information of optical label ID3 | Environment image list of optical label ID3
……
The server may receive multiple pieces of position information for the same optical label from one or more identification devices with optical label identification capability, and these are likely to differ (since each is only an approximate position of the optical label). The server does not need to store all of them. For example, in one embodiment, the server may store only the mean of the received positions in the database; in another embodiment, the server may store the first position information received for an optical label and then adjust the stored position based on subsequently received position information.
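Adjusting a stored position as new reports arrive can be done with a running mean — one possible reading of "adjust the stored position information"; the tuple-of-coordinates representation and the stored report count are assumptions of this sketch.

```python
def update_position(stored, count, new_position):
    """Fold a newly reported position into the stored estimate incrementally.

    stored: current mean position, e.g. a (lat, lon) tuple.
    count: number of reports already folded into `stored`.
    new_position: the newly received approximate position.
    Returns the updated mean and the new report count, without keeping the
    full history of reports.
    """
    n = count + 1
    updated = tuple(s + (p - s) / n for s, p in zip(stored, new_position))
    return updated, n
```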
In one embodiment, considering that the environmental images taken at different times may have differences due to lighting conditions or the like (for example, images taken at day and night may have large differences), the photographing time information of each environmental image may be further stored in the above table structure, and the photographing time information may be used to obtain a more accurate recognition result in the manner described in the foregoing.
The identification device referred to herein may be a device carried by a user (e.g., a cell phone, a tablet, smart glasses, a smart watch, etc.), but it is understood that the identification device may also be a machine capable of autonomous movement, e.g., a drone, an unmanned automobile, a robot, etc., on which an image capture device, such as a camera, is mounted.
The following illustrates a process of identifying an optical label according to one embodiment of the present invention.
Fig. 8 shows the ID information (Lid1-Lid6) of six optical labels stored in a database, together with six reference environment images, each containing one of the six optical labels, where Lid1 = Mart, Lid2 = Ourhours, Lid3 = Starbucks, Lid4 = Station, Lid5 = BurgerKing, and Lid6 = Coffee. These reference environment images may be provided by identification devices with optical label identification capability. For simplicity, only one reference environment image is shown for each optical label in this example, but it will be appreciated that storing multiple reference environment images for each optical label is feasible and helps to improve the accuracy of the classification model. The server may create a classification model based on these optical label IDs and the reference environment images.
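The patent does not fix a model type; in practice a convolutional network would be a typical choice. As a toy stand-in for "creating a classification model from optical label IDs and reference environment images", the sketch below trains by storing one mean feature vector per optical label ID and classifies by nearest mean; the feature vectors are assumed to have been extracted from the images elsewhere.

```python
def train_classifier(training_data):
    """Build a nearest-mean classifier from (label_id, feature_vector) pairs.

    Returns a dict mapping each optical label ID to the mean of its
    reference feature vectors.
    """
    by_id = {}
    for label_id, features in training_data:
        by_id.setdefault(label_id, []).append(features)
    return {lid: [sum(col) / len(col) for col in zip(*vecs)]
            for lid, vecs in by_id.items()}

def classify(model, features):
    """Return the optical label ID whose stored mean is nearest (squared L2)."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda lid: d2(model[lid], features))
```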
Fig. 9 shows six environment images taken by an identification device without optical label identification capability, together with the results obtained after classifying those images using the classification model. As the results in fig. 9 show, the identification method of the present invention can accurately identify the optical label contained in an environment image captured by such a device.
In one embodiment of the invention, the invention may be implemented in the form of a computer program. The computer program may be stored in various storage media (e.g., hard disk, optical disk, flash memory, etc.), which when executed by a processor, can be used to implement the methods of the present invention.
In another embodiment of the invention, the invention may be implemented in the form of an electronic device. The electronic device comprises a processor and a memory in which a computer program is stored which, when being executed by the processor, can be used for carrying out the method of the invention.
References herein to "various embodiments," "some embodiments," "one embodiment," or "an embodiment," etc., indicate that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, appearances of such phrases in various places throughout this document do not necessarily refer to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments; a feature, structure, or characteristic illustrated or described in connection with one embodiment may be combined, in whole or in part, with those of one or more other embodiments without limitation, as long as the combination is logically and operationally sound. Expressions such as "according to A" or "based on A" are meant to be non-exclusive: "according to A" may cover "according to A only" as well as "according to A and B," unless it is specifically stated, or clear from the context, that the meaning is "according to A only." The steps described in a method flow need not be performed in the order given; the order of some steps may be changed and some steps may be performed concurrently, as long as the implementation of the scheme is not affected. Additionally, the various elements of the drawings of the present application are merely schematic illustrations and are not drawn to scale.
Having thus described several aspects of at least one embodiment of this invention, it is to be appreciated that various alterations, modifications, and improvements will readily occur to those skilled in the art. Such alterations, modifications, and improvements are intended to be within the spirit and scope of the invention. Although the present invention has been described by way of preferred embodiments, it is not limited to the embodiments described herein, and various changes and modifications may be made without departing from its scope.

Claims (13)

1. An identification method of an optical communication device, comprising:
receiving an environmental image to be identified containing the optical communication device, which is shot by an identification device;
inputting the environmental image to be recognized into a classification model to obtain a classification output result, wherein the classification model is obtained using pre-stored training data, the training data including identification information of a plurality of optical communication devices and a reference environmental image, and wherein the classification output result is identification information of the optical communication devices; and
transmitting information about the classification output result to the recognition device.
2. A recognition method according to claim 1, wherein one or more classification models are obtained using the training data.
3. The identification method of claim 2, wherein each class in each classification model corresponds to one optical communication device.
4. The identification method of any of claims 1-3, wherein the training data further includes location information of the plurality of optical communication devices.
5. The identification method of claim 4, wherein the training data is divided into sets based on the location information, the optical communication devices in each set having proximate geographic locations, and a classification model is trained for each set, the classification model having associated location information.
6. The identification method of claim 5, further comprising:
receiving information relating to the location of the optical communication device from the identification apparatus; and
selecting a classification model from a plurality of classification models into which the environmental image to be recognized is to be input based on the information relating to the position of the optical communication apparatus and the associated position information of each of the plurality of classification models.
7. The identification method according to any one of claims 1 to 3, wherein the training data further includes capturing time information of the reference environment images.
8. The recognition method according to claim 7, wherein the training data is divided into a plurality of sets based on the capturing time information of the reference environment image, and a classification model having associated capturing time information is trained for each set.
9. The identification method of claim 8, further comprising:
selecting, from a plurality of classification models, the classification model into which the environment image to be recognized is to be input, based on information about the shooting time of the environment image to be recognized and the associated shooting time information of each of the plurality of classification models.
10. A recognition method according to any one of claims 1-3, wherein said pre-stored training data is obtained by:
receiving identification information of an optical communication device and an environment image containing the optical communication device from an identification device with identification capability of the optical communication device; and
storing the environment image as a reference environment image in association with the identification information.
11. The identification method of claim 10, further comprising:
receiving information about the position of the optical communication device and/or information about the capturing time of the reference environment image from a recognition apparatus having an optical communication device recognition capability.
12. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, is operative to carry out the method of any one of claims 1-11.
13. An electronic device comprising a processor and a memory, in which a computer program is stored which, when being executed by the processor, is operative to carry out the method of any one of claims 1-11.
CN201910237504.8A 2019-03-27 2019-03-27 Identification method of optical communication device and corresponding electronic equipment Pending CN111753578A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910237504.8A CN111753578A (en) 2019-03-27 2019-03-27 Identification method of optical communication device and corresponding electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910237504.8A CN111753578A (en) 2019-03-27 2019-03-27 Identification method of optical communication device and corresponding electronic equipment

Publications (1)

Publication Number Publication Date
CN111753578A true CN111753578A (en) 2020-10-09

Family

ID=72671262

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910237504.8A Pending CN111753578A (en) 2019-03-27 2019-03-27 Identification method of optical communication device and corresponding electronic equipment

Country Status (1)

Country Link
CN (1) CN111753578A (en)

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104239034A (en) * 2014-08-19 2014-12-24 北京奇虎科技有限公司 Occasion identification method and occasion identification device for intelligent electronic device as well as information notification method and information notification device
CN104268498A (en) * 2014-09-29 2015-01-07 杭州华为数字技术有限公司 Two-dimension code recognition method and terminal
CN105046186A (en) * 2015-08-27 2015-11-11 北京恒华伟业科技股份有限公司 Two-dimensional code recognition method and device
CN105718840A (en) * 2016-01-27 2016-06-29 西安小光子网络科技有限公司 Optical label based information interaction system and method
CN106446883A (en) * 2016-08-30 2017-02-22 西安小光子网络科技有限公司 Scene reconstruction method based on light label
CN107571867A (en) * 2017-09-05 2018-01-12 百度在线网络技术(北京)有限公司 Method and apparatus for controlling automatic driving vehicle
WO2018041136A1 (en) * 2016-08-30 2018-03-08 陕西外号信息技术有限公司 Optical communication device and system and corresponding information transferring and receiving method
US9980100B1 (en) * 2017-08-31 2018-05-22 Snap Inc. Device location based on machine learning classifications
JP2018134051A (en) * 2017-02-23 2018-08-30 大学共同利用機関法人情報・システム研究機構 Information processing device, information processing method and information processing program
CN108681667A (en) * 2018-04-02 2018-10-19 阿里巴巴集团控股有限公司 A kind of unit type recognition methods, device and processing equipment
CN108710847A (en) * 2018-05-15 2018-10-26 北京旷视科技有限公司 Scene recognition method, device and electronic equipment
CN108805103A (en) * 2018-06-29 2018-11-13 Oppo广东移动通信有限公司 Image processing method and device, electronic equipment, computer readable storage medium
WO2019000461A1 (en) * 2017-06-30 2019-01-03 广东欧珀移动通信有限公司 Positioning method and apparatus, storage medium, and server
CN109242017A (en) * 2018-08-30 2019-01-18 杨镇蔚 Intelligent identification Method, device and the equipment of object information

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Wang Yongjie et al., "Simulation Research on Accurate Recognition of 3D Texture Image Features," Computer Simulation, vol. 29, no. 05, 15 May 2012 *
Ma Wenjie; He Liyuan; Liu Huabo; Li Cuiying, "Study on the Influence of Imaging Environment Factors on Tobacco Leaf Image Acquisition and Its Correction," Scientia Agricultura Sinica, no. 12, 20 December 2006 *

Similar Documents

Publication Publication Date Title
Chen et al. Crowd map: Accurate reconstruction of indoor floor plans from crowdsourced sensor-rich videos
US9842282B2 (en) Method and apparatus for classifying objects and clutter removal of some three-dimensional images of the objects in a presentation
CN102749072B (en) Indoor positioning method, indoor positioning apparatus and indoor positioning system
US10977525B2 (en) Indoor localization using real-time context fusion of visual information from static and dynamic cameras
US20210097103A1 (en) Method and system for automatically collecting and updating information about point of interest in real space
CN104520828B (en) Automatic media is issued
CN104284092A (en) Photographing method, intelligent terminal and cloud server
CN111323024B (en) Positioning method and device, equipment and storage medium
JP6990880B2 (en) Proactive input selection for improved image analysis and / or processing workflows
CN102638657A (en) Information processing apparatus and imaging region sharing determination method
CN105653676A (en) Scenic spot recommendation method and system
WO2019214640A1 (en) Optical label network-based navigation method and corresponding computing device
JP5330606B2 (en) Method, system, and computer-readable recording medium for adaptively performing image matching according to circumstances
CN112926461A (en) Neural network training and driving control method and device
CN112631333B (en) Target tracking method and device of unmanned aerial vehicle and image processing chip
KR20190029411A (en) Image Searching Method, and Media Recorded with Program Executing Image Searching Method
CN112788443B (en) Interaction method and system based on optical communication device
CN111753578A (en) Identification method of optical communication device and corresponding electronic equipment
CN113821033B (en) Unmanned vehicle path planning method, unmanned vehicle path planning system and terminal
KR20200048414A (en) Selfie support Camera System Using Augmented Reality
TWI741588B (en) Optical communication device recognition method, electric device, and computer readable storage medium
CN112581630A (en) User interaction method and system
CN111753580A (en) Identification method of optical communication device and corresponding electronic equipment
CN114071003B (en) Shooting method and system based on optical communication device
KR102483388B1 (en) Method for processing omnidirectional image and server performing the same

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination