WO2016146024A1 - Object recognition method and device, and indoor map generation method and device


Info

Publication number
WO2016146024A1
Authority
WO
WIPO (PCT)
Prior art keywords: item, webpage, items, information, identification
Prior art date
Application number
PCT/CN2016/076125
Other languages
French (fr)
Chinese (zh)
Inventor
聂华闻
Original Assignee
北京贝虎机器人技术有限公司
Priority date
Filing date
Publication date
Priority to CN201510110320.7
Priority to CN201510110320.7A (granted as CN106033435B)
Application filed by 北京贝虎机器人技术有限公司
Publication of WO2016146024A1


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor

Abstract

An object recognition method and device, and an indoor map generation method and device. The object recognition method comprises: continuously crawling and updating information from Internet webpages to establish an object feature library; and matching and recognizing indoor objects based on the object feature library. This addresses the technical problem of low object-recognition accuracy in the prior art and effectively improves recognition accuracy. In addition, an indoor map is generated via the object recognition method for indoor object positioning, effectively improving positioning accuracy.

Description

Item identification method and device, indoor map generation method and device

The present application claims priority to Chinese Patent Application No. 201510110320.7, filed on March 13, 2015.

Technical field

The present invention relates to the field of computers, and in particular, to an object identification method and apparatus, and an indoor map generation method and apparatus.

Background

With the continuous development of IoT devices and smart devices, research on item identification has also made great progress. At present, the commonly used item identification method matches items against an item feature library to identify them; the identification accuracy therefore depends largely on the number of features in the item feature library.

At present, the main methods for building the item feature library are as follows:

1) Manually entering the attributes of each comparison sample. Because the sample size and number of features that can be entered manually are very limited, the resulting item feature library contains few features;

2) Inputting a larger number of images and then building the library by self-learning. However, the sample size available for such learning is still very limited, and it is difficult to meet the need for high-precision object recognition.

Further, in real life the number and variety of items in a home or indoor environment, and the rate at which they grow, are remarkable. Manual entry and image input can only be applied in a limited set of specific environments and cannot actually be used on a large scale for item identification across varied environments.

In view of the technical problems that existing item identification accuracy is low and cannot be applied on a large scale, no effective solution has yet been proposed.

Summary of the invention

This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. The Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used in any way to limit the scope of the claimed subject matter.

An embodiment of the present invention provides an item identification method to solve the technical problems that item identification accuracy in the prior art is not high and that existing information collection methods are limited. The method includes:

Continuously crawling and updating the information in the Internet webpage to establish an item feature database;

Matching identification of indoor items based on the item feature library.

In one embodiment, the continuous crawling and updating of information in Internet webpages to create the item feature library includes:

Extract a web page;

Finding a webpage model that matches the webpage, wherein the webpage model identifies information carried by each page area in the webpage;

Based on the matched webpage model, the item name and the item feature of the corresponding item of the webpage are identified.

In one embodiment, the web page model is built in one of the following ways:

Clustering analysis of visual models of all web pages in the same website to obtain multiple webpage models in the website; or

The information carried in each page area of the webpage is determined according to the user experience to establish a webpage model.

In one embodiment, the continuous crawling and updating of information in Internet webpages to create the item feature library includes:

Extracting a web page to obtain a webpage organization code of the webpage;

An item name and an item feature of the corresponding item of the web page are extracted from the webpage organization code.

In one embodiment, extracting an item feature of the item corresponding to the webpage from the webpage organization code includes:

Determining structured information of the webpage organization code;

Determining, according to the structured information, a start string symbol and an end string symbol of each extracted item in the webpage organization code;

And according to the start string symbol and the end string symbol of each of the extracted items, the item name and the item feature of the corresponding item of the web page are obtained from the webpage organization code.

In one embodiment, the item feature comprises at least one of: a shape parameter, a volume parameter, a material parameter, a weight parameter.

In one embodiment, the information of the Internet webpage includes: information content displayed on a webpage of the article introduction website, and/or an information file of the article introduction information of the article introduction website.

The embodiment of the invention further provides an indoor map generation method, which solves the technical problem that the accuracy of the indoor map is not high because the item recognition accuracy is not high in the prior art, and the method includes:

The intelligent identification and information processing device acquires a panoramic image of an area of the indoor map to be generated;

Based on the item identification method, a plurality of relatively independent items are identified from the panoramic image, and the identified item features of the individual items are obtained from the established item feature library;

Determining, according to an image processing technique, an association relationship between distances and orientations of respective items in the panoramic image according to the acquired item characteristics of the individual items;

A map of the area of the indoor map to be generated is generated according to the relationship between the distances and orientations of the respective items.

In one embodiment, in the coordinate system of the map, the relationship between the distances and orientations of the individual items is stored in the form of a location linked list.

In one embodiment, the item characteristics of the identified individual items are obtained from the established item feature library, including:

Obtaining a volume attribute parameter of the identified item from the item feature library, wherein the volume attribute parameter comprises: length, width, and height of the item.

The embodiment of the invention further provides an item identification device, which solves the technical problem that the accuracy of item identification in the prior art is not high, and the device comprises:

a feature library building module is configured to continuously crawl and update information in the Internet webpage to establish an item feature database;

The identification module is configured to perform matching and identification on the indoor items based on the item feature library.

The embodiment of the invention further provides an indoor map generating device, located in an intelligent identification and information processing device, to solve the technical problem that indoor maps in the prior art are inaccurate because item identification accuracy is not high. The device includes:

a panoramic image generating module, configured to acquire a panoramic image of an area of the indoor map to be generated;

The item identification device is configured to identify a plurality of relatively independent items from the panoramic image, and acquire the identified item features of the individual items from the established item feature library;

An association determining module, configured to determine, according to an image processing technique, an association relationship between distances and orientations of each item in the panoramic image according to the acquired item features of the individual items;

The indoor map generating module is configured to generate a map of the area of the indoor map to be generated according to the relationship between the distance and the orientation of the respective items.

In the embodiment of the present invention, the item feature library is established by continuously crawling and learning from webpages on the Internet, thereby realizing matching and identification of indoor objects. Because the amount of data on the Internet is huge, the information in the item feature library can be made more comprehensive. This effectively solves the technical problem that item identification accuracy in the prior art is not high, achieves the technical effect of effectively improving identification accuracy, and keeps pace with the rapid growth in item types, greatly widening the scope in which the item identification method can be used.

Brief Description of the Drawings

The drawings described herein are provided for a further understanding of the invention and are not intended to limit the invention. In the drawings:

FIG. 1 is a schematic diagram of an application scenario according to an embodiment of the present invention;

FIG. 2 is a flow chart of a method for identifying an item and a method for generating an indoor map according to an embodiment of the present invention;

FIG. 3 is a schematic diagram of model extraction of a product introduction webpage according to an embodiment of the present invention;

FIG. 4 is a schematic diagram of a location linked list storing the distance and orientation relationships between items according to an embodiment of the present invention;

FIG. 5 is a block diagram showing the structure of an indoor map generating apparatus according to an embodiment of the present invention.

Detailed Description

In order to make the objects, technical solutions and advantages of the present invention more comprehensible, the present invention will be further described in detail with reference to the embodiments and drawings. The illustrative embodiments of the present invention and the description thereof are intended to explain the present invention, but are not intended to limit the invention.

It should be understood that any examples herein are non-limiting. The invention is therefore not limited to any specific embodiment, aspect, concept, structure, function or example described herein; rather, any embodiment, aspect, concept, structure, function, or example described herein is merely illustrative.

Features described and/or illustrated with respect to one embodiment or example may be used in the same or a similar manner in one or more other embodiments or examples, may be combined with features of other embodiments or examples, or may replace features of other embodiments or examples.

It should be emphasized that the words "including", "based on", or "comprising", when used in this specification, indicate the presence of the recited features, elements, steps or components, but do not exclude the presence or addition of one or more other features, elements, steps, components, or combinations thereof.

Reference is first made to FIG. 1, which shows an application scenario in which embodiments of the present invention may be implemented. The scenario shown in FIG. 1 includes a terminal 100, an item to be identified 200, and the Internet 300. The terminal 100 may be a mobile terminal, such as a mobile electronic device (a mobile phone, a tablet computer, a notebook computer, or a personal digital assistant), or a robot, such as a cleaning robot, a chat robot, or a security robot.

The terminal 100 can exchange information with, or obtain information from, the Internet by wire or wirelessly. The terminal 100 can have a built-in processor and image acquisition module, through which it can "see" the item to be identified 200; "seeing" here means obtaining a picture of the item to be identified, or capturing an image of the item to be identified in combination with a camera device, after which the processor performs matching identification of the item. To achieve matching identification, an item feature library must first be established. When the item feature library is created, information in webpages on the Internet 300 can be continuously crawled and updated to establish the library, and items are then matched and identified based on it. Because the learning samples come from the Internet, with its vast resources, the feature library can be refined without limit, which can greatly improve the accuracy of matching and recognition.

The item identification method and the indoor map generation method of the exemplary embodiment of the present invention will be described below with reference to the application scenario of FIG. 1 and the method illustrated in FIG. 2 .

It should be noted that the above application scenarios are only shown to facilitate understanding of the spirit and principle of the present invention, and embodiments of the present invention are not limited in this respect. Rather, embodiments of the invention may be applied to any scenario that is applicable.

Referring to FIG. 2, which is a flowchart of a method for identifying an item and a method for generating an indoor map according to an embodiment of the present invention, the method may mainly include the following steps:

Step 201: Perform continuous crawling update learning on information in the Internet webpage to establish an item feature database.

That is, the item feature library is trained on the vast network resources of the Internet, so that the continuously updated feature library keeps improving, and with it the accuracy of matching recognition. Specifically, the continuous crawling and update learning of webpage information may be performed in one or more of the following ways:

1) Extracting a webpage and finding a webpage model matching the webpage, wherein the webpage model identifies the information carried in each page area of the webpage; and, based on the matched webpage model, identifying the item name and item features of the item corresponding to the webpage.

That is, for webpages of the same type within the same website, each page area carries the same type of information. Webpages of the same type can therefore be clustered and analyzed to determine what information each area carries, so that when extracting information, extraction can proceed area by area according to the data each area is known to contain.

In a specific implementation, the webpage model may be established in one of the following ways: clustering and analyzing the visual models of all webpages in the same website to obtain the multiple webpage models of that website; or determining, from experience, the information carried by each area of the webpage to build a webpage model.

For example, as shown in FIG. 3, a webpage can be divided into multiple areas by visual proximity, the positional relationships of the areas arranged and output, and the areas combined to form the webpage model corresponding to that webpage. Taking a shopping website as an example, FIG. 3 shows a schematic diagram of model extraction for a product introduction webpage. In the product introduction webpage, the top area is the item name, followed by the price area, then some item parameters, then the item image display area, with advertising information on the right side. The product webpages of the website can then be clustered and analyzed to obtain the webpage model corresponding to those webpages (in FIG. 3 the information carried by each area is identified by its coordinate region). In subsequent webpage crawling and learning, the specific data carried in each area can be determined according to the generated webpage model. In a specific implementation, areas may also be identified in ways other than coordinates; for example, the classification result may be obtained from URL rules, webpage structure, model intersection, and the like.
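By way of a non-limiting illustration of the clustering idea above, the following Python sketch (not taken from the patent itself) summarizes each page as a set of labeled regions with bounding boxes and groups pages whose layouts are sufficiently similar into one webpage model; the region labels, similarity measure, and threshold are all assumptions made for illustration.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class Region:
    label: str                      # e.g. "item_name", "price", "image" (assumed labels)
    box: Tuple[int, int, int, int]  # (left, top, width, height) in page coordinates

def layout_similarity(a: List[Region], b: List[Region]) -> float:
    """Fraction of regions in `a` that have a near-identical counterpart in `b`."""
    if not a or not b:
        return 0.0
    matched = 0
    for ra in a:
        for rb in b:
            if ra.label == rb.label and all(abs(x - y) < 20 for x, y in zip(ra.box, rb.box)):
                matched += 1
                break
    return matched / max(len(a), len(b))

def cluster_pages(pages: Dict[str, List[Region]], threshold: float = 0.8):
    """Greedy clustering: each cluster of near-identical layouts becomes one webpage model."""
    models: List[Tuple[List[Region], List[str]]] = []  # (representative layout, member page URLs)
    for url, regions in pages.items():
        for layout, members in models:
            if layout_similarity(regions, layout) >= threshold:
                members.append(url)
                break
        else:
            models.append((regions, [url]))
    return models
```

Each resulting model records which area of the page carries which kind of information, so later crawling can read each area directly.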

2) Extracting a webpage, obtaining a webpage organization code of the webpage, and extracting an item name and an item feature of the corresponding item of the webpage from the webpage organization code.

That is, because the webpage organization code carries all the information of the webpage and is composed according to well-defined rules, once the webpage organization code is obtained, the data in the webpage can be recovered from it.

In a specific implementation, the item features of the item corresponding to the webpage may be extracted from the webpage organization code as follows: determine the structured information of the webpage organization code; determine, from that structured information, the start string symbol and end string symbol of each item to be extracted in the webpage organization code; and then, according to these start and end string symbols, obtain the item name and item features of the corresponding item from the webpage organization code. For example, extraction can be done with manually written templates: the webpage organization code of a webpage is taken as input, an extraction template is matched according to the code and the URL, and the structured information is extracted through the template. A template simply defines the start string symbol and end string symbol of each item to be extracted. In a specific implementation, the webpage organization code may be HTML code, or XML code, JavaScript code, or any other code capable of organizing a webpage; which code is used is not specifically limited in this application and may be chosen as required. Taking HTML as an example, suppose a webpage is given, say http://item.jd.com/1158180.html, and the corresponding product name is to be obtained from its webpage organization code. The start string (brackets not included) is [product name:] and the end string is [</li>]; likewise, the start tag for gross weight is [product gross weight:] and its end tag is [</li>]. By analogy, the detailed parameters of all target items to be extracted from the Internet can be sorted out manually.
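To make the template idea concrete, here is a minimal sketch (an illustration under assumed marker strings and sample HTML, not the patent's implementation) that pulls a field out of webpage organization code given a start string and an end string:

```python
def extract_field(html: str, start_marker: str, end_marker: str) -> str | None:
    """Return the text between the first occurrence of start_marker and the
    following end_marker, or None if either marker is missing."""
    start = html.find(start_marker)
    if start == -1:
        return None
    start += len(start_marker)
    end = html.find(end_marker, start)
    if end == -1:
        return None
    return html[start:end].strip()

# Hypothetical template for one shopping site: field name -> (start, end) markers.
TEMPLATE = {
    "item_name":    ("product name:", "</li>"),
    "gross_weight": ("product gross weight:", "</li>"),
}

def extract_item(html: str) -> dict:
    """Apply every (start, end) marker pair in the template to one page."""
    return {field: extract_field(html, s, e) for field, (s, e) in TEMPLATE.items()}

sample = "<ul><li>product name: Example Fridge 300L</li><li>product gross weight: 62.0kg</li></ul>"
print(extract_item(sample))
# {'item_name': 'Example Fridge 300L', 'gross_weight': '62.0kg'}
```

One template per website (or per webpage model) is enough, because pages of the same type share the same organization-code structure.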

In various embodiments described above, the item features may include, but are not limited to, at least one of: a shape parameter, a volume parameter, a material parameter, a weight parameter, and the like.

Because the established feature library is more complete, the results are more accurate and reliable. It is easy to see, for example, that a feature library trained on 1,000 samples is inevitably less accurate than one trained on 20,000 samples; although the accuracy will not increase twenty-fold, going from 1,000 training samples to 20,000 certainly improves it considerably.

The information of the Internet webpage mentioned above may be the information content displayed on the webpages of an item introduction website (for example, B2B, B2C, or product introduction websites that show the information directly on the page), or an information file introducing the item on such a website (for example, a linked PDF file or video file describing the item). It should be noted that the website types and file types mentioned above are given only to better illustrate the invention; other types of websites or other types of data files may also be used and are not limited here. Any webpage or data file from which item information can be obtained can serve as an object of the learning.

Further, the inventor has observed that the limited accuracy and efficiency with which prior art robots understand the real environment and surrounding objects stems mainly from the fact that current robots generate indoor maps based on ordinary two-dimensional coordinates. For example, when a robot needs to find an item to be located, it first determines an object close to itself, looks up the coordinates of that nearby object in the map, and then compares those coordinates with the coordinates of the item to be located to determine its approximate position; that is, it positions by absolute coordinates. Ordinary human thinking, however, is not based on absolute coordinates but on relative ones: when a person identifies what is in front of them, for example the front of the refrigerator, they can determine the approximate location of the TV, that is, its approximate position relative to the refrigerator. In other words, people generally think in terms of relative coordinates and orientations.

Step 202: The terminal (for example, the intelligent identification and information processing device, which may be, but is not limited to, a robot, and is referred to below as the smart device) may perform image scanning at any time while moving within the area of the map to be generated, to obtain multiple images of the area;

Step 203: stitching a panoramic image of the area from the multiple images obtained by image scanning;
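As a sketch of step 203 (one possible implementation, not mandated by the patent), OpenCV's stitcher can compose a panorama from overlapping frames; the file names below are assumptions.

```python
import cv2

# Hypothetical overlapping frames captured while the device moves through the room.
frames = [cv2.imread(name) for name in ("scan_0.jpg", "scan_1.jpg", "scan_2.jpg")]

stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, panorama = stitcher.stitch(frames)

if status == cv2.Stitcher_OK:
    cv2.imwrite("room_panorama.jpg", panorama)
else:
    # Not enough overlap or too few features between frames; rescan with more overlap.
    print(f"stitching failed with status {status}")
```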

Step 204: using the feature library established above, identifying a plurality of relatively independent items in the area from the panoramic image;

Specifically, a neural network algorithm may be used to extract individual items (e.g., a television, refrigerator, sofa, table, etc.) from the panoramic image and match them against the pre-established item feature library storing feature information for many items, thereby determining the category of each item. That is, based on the formed panoramic image, the items and their categories are identified; for example, the television and its model, the refrigerator and its model, and so on are identified from the indoor panoramic image.
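A minimal sketch of the matching step follows, under the assumption that both the detected items and the feature-library entries are represented as feature vectors; the vectors, labels, distance metric and threshold below are placeholders, since the patent does not prescribe a specific network or similarity measure.

```python
import numpy as np

# Hypothetical feature library: item name -> feature vector learned from crawled webpages/images.
feature_library = {
    "refrigerator": np.array([0.9, 0.1, 0.3]),
    "television":   np.array([0.2, 0.8, 0.5]),
    "sofa":         np.array([0.4, 0.4, 0.9]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_item(query_vec: np.ndarray, min_score: float = 0.8):
    """Return the best-matching library item, or None if nothing is similar enough."""
    best_name, best_score = None, -1.0
    for name, vec in feature_library.items():
        score = cosine(query_vec, vec)
        if score > best_score:
            best_name, best_score = name, score
    return (best_name, best_score) if best_score >= min_score else (None, best_score)

# A detected object whose features resemble the refrigerator entry.
print(match_item(np.array([0.85, 0.15, 0.25])))
```

The richer the crawled feature library, the more categories such a nearest-neighbour match can distinguish.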

Step 205: Obtain a volume attribute parameter of each of the plurality of relatively independent items from the feature library established above;

Step 206: determining, by image processing and according to the obtained volume attribute parameters of each item, the distance and orientation relationships between the respective items in the panoramic image, to generate a map of the area.

That is, in order to determine the distance between items accurately, the volume attribute parameters of an item (e.g., length, width, height) may be obtained from the item feature library. In addition, the resolution of the camera and the attitude of the device are recorded during image scanning. Therefore, by comparing two images of the same item, the physical distance between the camera and the object in the image can be determined, and with the item's volume attribute parameters the precise distance between items can then be derived, ultimately forming an indoor map based on the distance and orientation relationships between items.
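One simple way to realize this (an illustrative assumption; the patent does not fix the geometric model) is the pinhole relation: if the real height of an item is known from the feature library and its height in pixels is measured in the scanned image, the camera-to-item distance follows from the focal length.

```python
def camera_to_item_distance(real_height_m: float,
                            pixel_height: float,
                            focal_length_px: float) -> float:
    """Pinhole-camera estimate: distance = f * H_real / h_pixels.

    real_height_m   -- item height taken from the feature library (metres)
    pixel_height    -- height of the item in the scanned image (pixels)
    focal_length_px -- camera focal length expressed in pixels
    """
    return focal_length_px * real_height_m / pixel_height

# Hypothetical numbers: a 1.7 m refrigerator imaged 425 px tall with f = 1000 px.
print(camera_to_item_distance(1.7, 425, 1000.0))  # -> 4.0 metres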

Considering that the items in the map are related directly to one another, the distance and orientation relationships between items can be stored in the form of a location linked list. FIG. 4 shows an example of such a location linked list: the identified refrigerator, sofa, TV, etc. are each treated as a point at their centre of gravity, so the recorded distance is the distance between the centres of gravity of two items, and, with the front-centre position of the item the robot currently faces taken as the reference point, the distance and bearing of another item are determined.
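A sketch of what such a location linked list might look like in code follows; the names and the (distance, bearing) convention are assumptions, since the patent only requires that each item record its distance and orientation to related items.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class ItemNode:
    name: str
    # Each link: (neighbouring item, distance in metres, bearing in degrees
    # measured from the front-centre reference point of this item).
    links: List[Tuple["ItemNode", float, float]] = field(default_factory=list)

    def link_to(self, other: "ItemNode", distance_m: float, bearing_deg: float) -> None:
        self.links.append((other, distance_m, bearing_deg))

refrigerator = ItemNode("refrigerator")
television = ItemNode("television")
sofa = ItemNode("sofa")

refrigerator.link_to(television, distance_m=3.2, bearing_deg=45.0)
television.link_to(sofa, distance_m=2.1, bearing_deg=170.0)

for other, d, b in refrigerator.links:
    print(f"{other.name}: {d} m at {b} deg from the refrigerator")
```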

Step 207: When performing indoor object positioning based on the map established above, the smart device (for example, a robot) may perform an image scan at its current position and identify at least one item from the scanned image;

Step 208: According to the identified at least one item, finding, from the map established on the basis of the distance and orientation relationships between items, the distance and orientation of the item to be positioned relative to the identified item;

Step 209: Determine, according to the scanned image and the volume attribute parameter of the identified item, the distance and orientation of the smart device relative to the identified item;

Step 210: Determining the distance and orientation of the item to be positioned relative to the smart device according to the distance and orientation of the item to be positioned relative to the identified item and the distance and orientation of the smart device relative to the identified item.
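Steps 208-210 amount to composing two relative displacements. A minimal sketch follows (the numbers and names are illustrative assumptions, and both bearings are assumed to be expressed in one common reference frame): each (distance, bearing) pair is converted to a vector and the difference gives the target's position relative to the device.

```python
import math

def to_vector(distance_m: float, bearing_deg: float) -> tuple:
    """Convert a (distance, bearing) pair into an (x, y) displacement."""
    rad = math.radians(bearing_deg)
    return (distance_m * math.cos(rad), distance_m * math.sin(rad))

def target_relative_to_device(target_from_landmark, device_from_landmark):
    """Both arguments are (distance_m, bearing_deg) measured from the same identified
    landmark item in the same reference frame; returns (distance_m, bearing_deg)
    of the target item as seen from the device."""
    tx, ty = to_vector(*target_from_landmark)
    dx, dy = to_vector(*device_from_landmark)
    vx, vy = tx - dx, ty - dy
    return math.hypot(vx, vy), math.degrees(math.atan2(vy, vx))

# Hypothetical: the refrigerator is 3.2 m at 45 deg from the TV (the landmark),
# and the robot is 1.5 m at 100 deg from the same TV.
print(target_relative_to_device((3.2, 45.0), (1.5, 100.0)))
```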

This makes the smart device's reasoning when positioning indoor objects closer to human thinking: a person standing in a house who wonders where the refrigerator is only needs to determine its distance and orientation relative to their current position based on the items in front of them. This differs fundamentally from a map system based on mathematical (XYZ-like) coordinates and can significantly improve the ease of human-computer interaction, making it easy to communicate with the smart device. Taking a robot as an example, you can tell the robot to fetch a bottle of water from the refrigerator and bring it to the coffee table in the living room. The robot then only needs to take a look (an image scan of what is in front of it), identify the item in front of it and determine its own orientation and distance relative to that item, and then, from the map, determine the distance and orientation between the refrigerator and that item; the robot can then move accurately to the refrigerator's location.

In general, the inventor considers that the reason existing item identification methods have low recognition accuracy is that manually built item feature libraries contain too few features, and the sample sizes used to train them are too small. The Internet, by contrast, holds abundant resources and can be said to cover virtually all the data one might want. Therefore, the item feature library can be established by continuously crawling and learning from Internet webpages, enabling more accurate item identification. Building on this item identification, this example further proposes a high-precision indoor map generation method that generates an indoor map from the distances and orientations between items; based on such a map, a device only needs to identify one item in its surroundings to determine intuitively and effectively the distance and position of other items relative to that item, realizing positioning that is closer to human thinking.

The above embodiment proposes a method of self-learning an item feature library from Internet data to realize item identification, together with an indoor item positioning-and-identification and indoor map drawing method based on that item identification method. It innovatively addresses the accuracy and efficiency with which a smart device recognizes and understands the real environment and surrounding objects during human-computer interaction with ordinary people, and can form, from the positional relationships between identified items, an item-based indoor coordinate map system that people can read directly and machines can use.

Because it is based on self-learning from Internet data, it solves the problem that existing smart devices require a feature comparison library for indoor item identification to be created manually in advance, with the attributes of each comparison sample specified by hand. In real life, the number and variety of items in a home or indoor environment, and the rate at which they grow, are remarkable; methods that rely on manually specifying and training identification items can only be applied in a limited set of specific environments and cannot be used on a large scale in varied indoor environments. The present approach can therefore satisfy the accurate identification of ever more items, allowing item identification to be applied in a much wider range of fields. Because the Internet holds so much data, retrieving and learning from data in the network can greatly improve the accuracy and effectiveness of identification.

Based on the same inventive concept, an embodiment of the present invention also provides an indoor map generating device, as described in the following embodiments. Since the principle by which the indoor map generating device solves the problem is similar to that of the indoor map generating method, the implementation of the device may refer to the implementation of the method, and repeated description is omitted. As used hereinafter, the term "unit" or "module" may implement a combination of software and/or hardware of a predetermined function. Although the apparatus described in the following embodiments is preferably implemented in software, implementation in hardware, or a combination of software and hardware, is also possible and contemplated. FIG. 5 is a structural block diagram of an indoor map generating apparatus according to an embodiment of the present invention. The indoor map generating apparatus is located in an intelligent identification and information processing apparatus. As shown in FIG. 5, it may include:

The panoramic image generating module 501 can be configured to acquire a panoramic image of an area of the indoor map to be generated;

The item identification device 502 is connected to the panoramic image generation module 501 and can be used to identify a plurality of relatively independent items from the panoramic image and to acquire the item features of the identified individual items from the established item feature library;

The association determining module 503 is connected to the item identification device 502, and can be configured to determine, according to the image processing technology, the relationship between the distance and the orientation of each item in the panoramic image according to the acquired item characteristics of the individual items;

The indoor map generating module 504 is connected to the association determining module 503, and can be configured to generate a map of the area of the indoor map to be generated according to the relationship between the distance and the orientation of the respective items.

The connection between the above modules may be a wired connection or a wireless connection, which is not limited in this application.

Specifically, as shown in FIG. 5, the item identification device 502 may include: a feature library establishing module 5021, configured to continuously crawl and learn from webpages on the Internet to establish the item feature library; and an identification module 5022, configured to match and identify indoor items based on the item feature library.

In a specific implementation, the feature library establishing module 5021 can continuously crawl and learn from the pictures on online product webpages together with the corresponding product names and related attribute parameters, so that the features in the feature library become ever richer. The panoramic image generating module 501 forms an indoor panoramic map image from the images obtained by image scanning, relying on a camera to acquire the images and on image synthesis and rendering techniques. The identification module 5022 identifies the individual items in the indoor panoramic map image, and their categories, which can be done accurately with a convolutional neural network; the identification process uses the item feature library established by the feature library establishing module 5021. The indoor map generating module 504 then generates, from the distance and orientation relationships between the items determined by the association determining module 503, an indoor map based on the relationships between the items, that is, an item-based indoor map system.
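To show how these modules might fit together, here is a skeletal sketch; the class and method names are assumptions, since the patent describes the modules functionally rather than through any particular interface.

```python
class IndoorMapPipeline:
    """Skeleton wiring of the modules described above (illustrative only)."""

    def __init__(self, feature_library, camera):
        self.feature_library = feature_library  # built by crawling product webpages
        self.camera = camera

    def build_map(self):
        frames = self.camera.scan_area()                # step 202: image scanning
        panorama = self.stitch(frames)                  # step 203: panorama stitching
        items = self.identify_items(panorama)           # steps 204-205: recognition + volume attributes
        relations = self.relate_items(panorama, items)  # step 206: distances and bearings
        return relations                                # the item-based indoor map

    # Placeholders standing in for the module implementations sketched earlier.
    def stitch(self, frames): ...
    def identify_items(self, panorama): ...
    def relate_items(self, panorama, items): ...
```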

For the identification of indoor items, positioning is based mainly on the item-based indoor map system described above. Applied to communication with a robot, this greatly improves communication efficiency and accuracy: the robot only needs to recognize any nearby object to determine directly the orientation and distance of other items. For example, an indoor map built on object recognition allows the robot to communicate by voice using indoor items as references, for example telling you that something is wrong somewhere in the home, or being told to go and do something at a particular item.

In this example, by accurately identifying indoor items, a map coordinate system based on the mutual positional relationships between items is generated, so that the established map coordinates are closer to how humans perceive the positions of items relative to each other; that is, an item-based indoor map coordinate system is formed that both humans and machines can read naturally. This solves the prior-art problem that robots recognize and understand the real environment and surrounding objects inaccurately and inefficiently, and achieves the technical effect of effectively improving positioning and recognition accuracy and positioning efficiency. More importantly, a machine-learning method based on Internet data is proposed that enables the object-recognition feature library to self-learn and grow at massive scale, greatly improving the accuracy with which smart devices recognize objects.

In another embodiment, software is also provided for performing the technical solutions described in the above embodiments and preferred embodiments.

In another embodiment, a storage medium is further provided in which the above-mentioned software is stored, the storage medium including but not limited to: an optical disk, a floppy disk, a hard disk, an erasable memory, and the like.

Obviously, those skilled in the art will understand that the above modules or steps of the embodiments of the present invention can be implemented by a general-purpose computing device; they can be concentrated on a single computing device or distributed over multiple computing devices. Alternatively, they may be implemented by program code executable by a computing device, so that they may be stored in a storage device and executed by the computing device, and in some cases the steps shown or described may be performed in an order different from that given here; or they may be fabricated separately into individual integrated circuit modules, or multiple modules or steps among them may be fabricated into a single integrated circuit module. Thus, embodiments of the invention are not limited to any specific combination of hardware and software.

The above description covers only preferred embodiments of the present invention and is not intended to limit it; various changes and modifications may be made to the embodiments of the present invention. Any modifications, equivalent substitutions, improvements, and the like made within the spirit and scope of the present invention are intended to be included within the scope of the present invention.

Claims (12)

  1. An item identification method, comprising:
    Continuously crawling and updating the information in the Internet webpage to establish an item feature database;
    Matching identification of indoor items based on the item feature library.
  2. The method of claim 1, wherein continuously crawling and updating the information in the Internet webpages to create the item feature library comprises:
    Extract a web page;
    Finding a webpage model that matches the webpage, wherein the webpage model identifies information carried by each page area in the webpage;
    Based on the matched webpage model, the item name and the item feature of the corresponding item of the webpage are identified.
  3. The method of claim 2 wherein said web page model is established in accordance with one of the following methods:
    Clustering analysis of visual models of all web pages in the same website to obtain multiple webpage models in the website; or
    The information carried in each page area of the webpage is determined according to the user experience to establish a webpage model.
  4. The method according to claim 1, wherein continuously crawling and updating the information in the Internet webpages to create the item feature library comprises:
    Extracting a web page to obtain a webpage organization code of the webpage;
    An item name and an item feature of the corresponding item of the web page are extracted from the webpage organization code.
  5. The method of claim 4, wherein extracting an item feature of the item corresponding to the webpage from the webpage organization code comprises:
    Determining structured information of the webpage organization code;
    Determining, according to the structured information, a start string symbol and an end string symbol of each extracted item in the webpage organization code;
    And according to the start string symbol and the end string symbol of each of the extracted items, the item name and the item feature of the corresponding item of the web page are obtained from the webpage organization code.
  6. The method of any of claims 2 to 5, wherein the item feature comprises at least one of: a shape parameter, a volume parameter, a material parameter, a weight parameter.
  7. The method according to any one of claims 1 to 5, wherein the information of the Internet webpage comprises: the information content displayed on the webpage of the article introduction website, and/or an information file of article introduction information of the article introduction website.
  8. An indoor map generating method, comprising:
    The intelligent identification and information processing device acquires a panoramic image of an area of the indoor map to be generated;
    Identifying, by the article identification method according to any one of claims 1 to 7, a plurality of relatively independent items from the panoramic image, and acquiring item features of the identified individual items from the established item feature library;
    Determining, according to an image processing technique, an association relationship between distances and orientations of respective items in the panoramic image according to the acquired item characteristics of the individual items;
    A map of the area of the indoor map to be generated is generated according to the relationship between the distances and orientations of the respective items.
  9. The method according to claim 8, wherein in the coordinate system of the map, the relationship between the distances and the orientations of the respective articles is stored in the form of a location linked list.
  10. The method of claim 8 wherein obtaining the identified item features of the individual items from the established item feature library comprises:
    Obtaining a volume attribute parameter of the identified item from the item feature library, wherein the volume attribute parameter comprises: length, width, and height of the item.
  11. An item identification device, comprising:
    a feature library building module is configured to continuously crawl and update information in the Internet webpage to establish an item feature database;
    The identification module is configured to perform matching and identification on the indoor items based on the item feature library.
  12. An indoor map generating device, which is located in an intelligent identification and information processing device, and includes:
    a panoramic image generating module, configured to acquire a panoramic image of an area of the indoor map to be generated;
    The article identification device of claim 11 for identifying a plurality of relatively independent items from the panoramic image and acquiring the identified item features of the individual items from the established item feature library;
    An association determining module, configured to determine, according to an image processing technique, an association relationship between distances and orientations of each item in the panoramic image according to the acquired item features of the individual items;
    The indoor map generating module is configured to generate a map of the area of the indoor map to be generated according to the relationship between the distance and the orientation of the respective items.
PCT/CN2016/076125 2015-03-13 2016-03-11 Object recognition method and device, and indoor map generation method and device WO2016146024A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201510110320.7 2015-03-13
CN201510110320.7A CN106033435B (en) 2015-03-13 2015-03-13 Item identification method and device, indoor map generation method and device

Publications (1)

Publication Number Publication Date
WO2016146024A1 true WO2016146024A1 (en) 2016-09-22

Family

ID=56918445

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/076125 WO2016146024A1 (en) 2015-03-13 2016-03-11 Object recognition method and device, and indoor map generation method and device

Country Status (2)

Country Link
CN (1) CN106033435B (en)
WO (1) WO2016146024A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107172442A (en) * 2017-06-11 2017-09-15 成都吱吖科技有限公司 A kind of interactive panoramic video storage method and device based on virtual reality

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106503741A (en) * 2016-10-31 2017-03-15 深圳前海弘稼科技有限公司 Floristic recognition methods, identifying device and server
CN106681323A (en) * 2016-12-22 2017-05-17 北京光年无限科技有限公司 Interactive output method used for robot and the robot
CN106855946A (en) * 2016-12-27 2017-06-16 努比亚技术有限公司 A kind of image information acquisition method and apparatus
CN106709462A (en) * 2016-12-29 2017-05-24 天津中科智能识别产业技术研究院有限公司 Indoor positioning method and device
CN107958040A (en) * 2017-11-22 2018-04-24 青岛乾恒智能科技有限公司 A kind of intelligence system for indoor article positioning, management and analysis

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101567159A (en) * 2009-06-10 2009-10-28 北京豪仪测控工程有限公司 Navigation method based on image identification technology and navigation apparatus
US20110246329A1 (en) * 2010-04-01 2011-10-06 Microsoft Corporation Motion-based interactive shopping environment
CN102254265A (en) * 2010-05-18 2011-11-23 北京首家通信技术有限公司 Rich media internet advertisement content matching and effect evaluation method
CN103278170A (en) * 2013-05-16 2013-09-04 东南大学 Mobile robot cascading map building method based on remarkable scenic spot detection

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101276362B (en) * 2007-03-26 2011-05-11 国际商业机器公司 Apparatus and method for customizing web page
US8438080B1 (en) * 2010-05-28 2013-05-07 Google Inc. Learning characteristics for extraction of information from web pages
CN101996243A (en) * 2010-11-05 2011-03-30 我查查信息技术(上海)有限公司 Method, equipment and server for querying commodity information
CN103761654A (en) * 2014-01-06 2014-04-30 李启山 Marked commodity photographing informatization recognition anti-counterfeit technology

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101567159A (en) * 2009-06-10 2009-10-28 北京豪仪测控工程有限公司 Navigation method based on image identification technology and navigation apparatus
US20110246329A1 (en) * 2010-04-01 2011-10-06 Microsoft Corporation Motion-based interactive shopping environment
CN102254265A (en) * 2010-05-18 2011-11-23 北京首家通信技术有限公司 Rich media internet advertisement content matching and effect evaluation method
CN103278170A (en) * 2013-05-16 2013-09-04 东南大学 Mobile robot cascading map building method based on remarkable scenic spot detection

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107172442A (en) * 2017-06-11 2017-09-15 成都吱吖科技有限公司 A kind of interactive panoramic video storage method and device based on virtual reality

Also Published As

Publication number Publication date
CN106033435A (en) 2016-10-19
CN106033435B (en) 2019-08-02

Similar Documents

Publication Publication Date Title
Chang et al. Matterport3d: Learning from rgb-d data in indoor environments
AU2010326655B2 (en) Hybrid use of location sensor data and visual query to return local listings for visual query
US8421872B2 (en) Image base inquiry system for search engines for mobile telephones with integrated camera
US20080005105A1 (en) Visual and multi-dimensional search
US9165406B1 (en) Providing overlays based on text in a live camera view
US20100331043A1 (en) Document and image processing
US8254684B2 (en) Method and system for managing digital photos
Manovich How to compare one million images?
US20070237426A1 (en) Generating search results based on duplicate image detection
US10200336B2 (en) Generating a conversation in a social network based on mixed media object context
US20140003714A1 (en) Gesture-based visual search
US20110150324A1 (en) Method and apparatus for recognizing and localizing landmarks from an image onto a map
US20180046855A1 (en) Face detection and recognition
KR20120026402A (en) Method and apparatus for providing augmented reality using relation between objects
JP2010530998A (en) Image-based information retrieval method and system
US9639740B2 (en) Face detection and recognition
US9600499B2 (en) System for collecting interest graph by relevance search incorporating image recognition system
EP2015166B1 (en) Recognition and tracking using invisible junctions
CN103838566A (en) Information processing device, and information processing method
CN104685501B (en) Text vocabulary is identified in response to visual query
CN107122380A (en) Strengthen the computer implemented method and computer system of RUNTIME VIEW
CN103814351A (en) Collaborative gesture-based input language
WO2014005451A1 (en) Cloud service-based visual search method and system, and computer storage medium
US9721148B2 (en) Face detection and recognition
CN103988202A (en) Image attractiveness based indexing and searching

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16764204

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase in:

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16764204

Country of ref document: EP

Kind code of ref document: A1