WO2014130417A1 - Multi disciplinary engineering design using image recognition - Google Patents

Multi disciplinary engineering design using image recognition

Info

Publication number
WO2014130417A1
Authority
WO
WIPO (PCT)
Prior art keywords
objects
graphical
paired
graphical object
external
Prior art date
Application number
PCT/US2014/016799
Other languages
French (fr)
Inventor
Oswin Noetzelmann
Rami Reuveni
Victor Robert HAMBRIDGE
Marine DUREL
Tim OERTER
Christopher Patrick PORTWAY, JR.
Dirk VIELSAECKER
Sarvananthan RAGAVAN
Mingjun Zhang
Daniela Stederoth
Original Assignee
Siemens Aktiengesellschaft
Siemens Product Lifecycle Management Software Inc.
Priority date
Filing date
Publication date
Application filed by Siemens Aktiengesellschaft and Siemens Product Lifecycle Management Software Inc.
Publication of WO2014130417A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/64Three-dimensional objects
    • G06V20/647Three-dimensional objects by matching two-dimensional images to three-dimensional objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/40Software arrangements specially adapted for pattern recognition, e.g. user interfaces or toolboxes therefor
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/10Office automation; Time management
    • G06Q10/101Collaborative creation, e.g. joint development of products or services

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A method of using image recognition for cross domain data exchange includes receiving (4-1) a set of paired graphical objects, wherein a first graphical object is read from an online library, and a second graphical object is an external graphics object, transferring (4-2) tags and classification associated with the first graphical object onto one or more other external representations of said first graphical object, and presenting (4-3) a user with a collection of external graphical objects that match the tag and classification criteria transferred from the first graphical object. The online library may be updated (4-4) with one or more of said matched external graphical objects, if one or more of said matched external graphical objects are not already contained in said library.

Description

MULTI DISCIPLINARY ENGINEERING DESIGN USING IMAGE RECOGNITION
Cross Reference to Related United States Applications
This application claims priority from "Method for Engineering Design using Image Recognition in a Multidisciplinary Engineering System", U.S. Provisional Application No. 61/766,160 of Noetzelmann, et al., filed February 19, 2013, the contents of which are herein incorporated by reference in their entirety.
Technical Field
This disclosure is directed to methods for cross-domain data exchange.
Discussion of the Related Art
Communication between disciplines often still takes place through the exchange of primitive graphics, such as schematics, which the receiving discipline will recognize as discipline-specific objects or object categories. The receiving party will use this information to build its own discipline-specific representation. However, this process can be assisted, or even automated completely, by using image recognition (IR) and optical character recognition (OCR) techniques.
Image recognition is commonly tailored to recognizing specific types of objects, such as faces in security software, books and other media in cell phone shopping applications, or text in the case of Optical Character Recognition (OCR). This specialization is traditionally built into the specific image recognition system, or added as a module created a priori.
Some recent approaches combine image type specialization with after-deployment user-added tagging. One example is automatic tagging in online social network systems, where a user can tag a face with a name, and future images will then automatically apply the tag wherever the face is recognized. Google has also combined automatic tagging with user input in Google Image Labeler, where the user directly inputs tags associated with image content. Traditionally, the work of engineering disciplines is separated from a data point of view, and manual synchronization of discipline-specific data is time consuming and error prone. A typical synchronization workflow may include importing data, and selecting a library object to replace an imported object based on the imported object's image and possibly its tags and classification. The user may rely on the visual representations of the imported external object and the library object to make this match.
FIG. 1 depicts external objects as being represented by outlined shapes, and the parameterized data as solid filled shapes. In FIG. 2, the user manually matches the external and parameterized objects, one at a time.
A multi-disciplinary engineering system is a system that allows engineers from multiple disciplines to work on common or connected data. For example, as illustrated in FIG. 3, a factory planner can work together with management, a mechanical engineer, an electrical engineer and an automation engineer to plan a new production line for a car door assembly.
Traditionally, the disciplines' work is separated from a data point of view, and manual synchronization of the discipline-specific data is time consuming and error prone. For example, when the automation engineer introduces a new programmable logic controller to the project, which he needs to automate the line, this information must be transported to the electrical engineer so that he can place it in the right electrical cabinet and plan its wiring. If this information is not transmitted, or is distorted, it can have a serious impact on the quality of the electrical engineer's work, and vice versa.
Data connections between the different disciplines can potentially be used to allow the system to support various functions: (1) Notification/communication between disciplines or departments; (2) Rule-based change propagation; (3) Formalization of workflows, such as sign-off procedures; (4) Multi-disciplinary report generation; and (5) Usage of common interdisciplinary data structures.
Summary
Exemplary embodiments of the invention as described herein generally include methods for cross domain data exchange, through matching non-parameterized graphics with parameterized data that also contains graphic representations. Exemplary embodiments of the invention use image recognition (IR) and optical character recognition (OCR) techniques to build multi-disciplinary objects from domain-specific objects and to update a shared library. Embodiments of the invention can be implemented as an integrated layer or as an add-in, within a multidisciplinary tool or in domain-specific tools.
According to an aspect of the invention, there is provided a computer-implemented method of using image recognition for cross domain data exchange, including receiving a set of paired graphical objects, wherein a first graphical object is read from an online library, and a second graphical object is an external graphics object, transferring tags and classification associated with the first graphical object onto one or more other external representations of said first graphical object, and presenting a user with a collection of external graphical objects that match the tag and classification criteria transferred from the first graphical object.
According to a further aspect of the invention, the method includes updating said online library with one or more of said matched external graphical objects, if one or more of said matched external graphical objects are not already contained in said library.
According to a further aspect of the invention, the method includes updating said online library with an encapsulation of one or more of said matched external graphical objects and the first graphical object within one of a new graphical object in the library, or the original first graphical object as one or more additional representations.
According to a further aspect of the invention, the set of paired graphical objects were paired by a user. According to a further aspect of the invention, the set of paired graphical objects were automatically paired by an image recognition program.
According to a further aspect of the invention, the image recognition program does not use pre-defined rules or user tags to pair said set of paired graphical objects.
According to a further aspect of the invention, pairing said set of paired graphical objects comprises matching a projected image of a 3-dimensional representation of the first graphical object with a 2-dimensional representation of said second graphical object.
According to a further aspect of the invention, the method includes receiving thresholds for recognizing similar objects, and tolerances for image recognition facets.
According to another aspect of the invention, there is provided a computer- implemented method of using image recognition for cross domain data exchange, including importing a 2-dimensional (2D) layout image, recognizing and cataloging objects in said 2D layout image, and collecting text associated with said objects in said layout image, receiving representations of one or more 3-dimensional (3D) objects placed on top of one or more recognized objects in said 2D layout image, wherein said 3D objects were extracted from an online library, extracting classifications associated with said placed 3D objects and applying said classifications to objects in said 2D image that are covered by the 3D objects, and adding the covered object with the applied classifications to the online library.
According to another aspect of the invention, there is provided a non-transitory program storage device readable by a computer, tangibly embodying a program of instructions executed by the computer to perform the method steps for using image recognition for cross domain data exchange.
Brief Description of the Drawings
FIG. 1 illustrates the importation of data and selection of a library object to replace an imported object, according to an embodiment of the invention.
FIG. 2 illustrates a user manually matching the external and parameterized objects, one at a time, according to an embodiment of the invention.
FIG. 3 illustrates how engineers from multiple disciplines can work on common or connected data, according to an embodiment of the invention.
FIG. 4 depicts a workflow according to an embodiment of the invention.
FIG. 5 depicts another workflow according to an embodiment of the invention.
FIG. 6 is a block diagram of an exemplary computer system for implementing a method for using image recognition in a multidisciplinary engineering system, according to an embodiment of the invention.
Detailed Description of Exemplary Embodiments
Exemplary embodiments of the invention as described herein generally include systems and methods for using image recognition in a multidisciplinary engineering system. Accordingly, while the invention is susceptible to various modifications and alternative forms, specific embodiments thereof are shown by way of example in the drawings and will herein be described in detail. It should be understood, however, that there is no intent to limit the invention to the particular forms disclosed, but on the contrary, the invention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the invention.
As used herein, the term "image" refers to multi-dimensional data composed of discrete image elements (e.g., pixels for 2-dimensional images and voxels for 3-dimensional images). The image may be, for example, a factory computer-aided design (CAD) layout, which may be a 2-dimensional or 3-dimensional pixel image or a 2- or 3-dimensional vector source image. Although an image can be thought of as a function from R² or R³ to R or R⁷, the methods of the invention are not limited to such images, and can be applied to images of any dimension, e.g., a 2-dimensional picture or a 3-dimensional volume. For a 2- or 3-dimensional pixel image, the domain of the image is typically a 2- or 3-dimensional rectangular array, wherein each pixel or voxel can be addressed with reference to a set of 2 or 3 mutually orthogonal axes. The terms "digital" and "digitized" as used herein will refer to images or volumes, as appropriate, in a digital or digitized format acquired via a digital acquisition system or via conversion from an analog image.
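As a purely illustrative aside (not part of the disclosure), the pixel/voxel addressing described above can be sketched with plain arrays; the array shapes and values below are assumptions chosen only for the example.

```python
import numpy as np

# 2-dimensional pixel image: each pixel is addressed along 2 mutually orthogonal axes.
layout_2d = np.zeros((480, 640), dtype=np.uint8)       # rows x columns
layout_2d[120, 200] = 255                               # pixel at row 120, column 200

# 3-dimensional voxel volume: each voxel is addressed along 3 mutually orthogonal axes.
volume_3d = np.zeros((64, 480, 640), dtype=np.uint8)   # slices x rows x columns
volume_3d[10, 120, 200] = 255                           # voxel at slice 10, row 120, column 200
```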
In a workflow according to an embodiment of the invention, it is assumed that an external graphical input is present, and that there is a library of reusable graphical objects. This library should be multi-disciplinary in nature, potentially holding multiple visual representations and other attributes, including tags and classifications. Each representation and attribute can be marked as belonging to one or more disciplines or shared by all.
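For illustration only, a minimal data-model sketch of such a multi-disciplinary library object follows. The class and field names are assumptions; the disclosure does not prescribe any particular schema.

```python
from dataclasses import dataclass, field

@dataclass
class Representation:
    discipline: str   # e.g. "line_design", "electrical", or "shared" for all disciplines
    kind: str         # e.g. "3d_model", "2d_symbol", "projected_top_view"
    data: bytes       # geometry or image payload

@dataclass
class LibraryObject:
    object_id: str
    tags: dict = field(default_factory=dict)             # tag name -> owning discipline(s)
    classifications: list = field(default_factory=list)  # e.g. ["robot", "6-axis"]
    representations: list = field(default_factory=list)  # list of Representation

    def representations_for(self, discipline: str):
        """Representations visible to one discipline, plus those shared by all."""
        return [r for r in self.representations if r.discipline in (discipline, "shared")]
```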
In a workflow according to an embodiment of the invention, depicted in FIG. 4, a user may pair the visual representation of a graphical library object with another visual representation from an external source, herein referred to as an external representation, and an IR system according to an embodiment of the invention provides assistance in pairing related internal objects. In FIG. 4, the box labeled "New Module" between the "Library" box and the "Graphics" box represents an integrated layer or an add-on according to an embodiment of the disclosure.
Referring now to FIG. 4, a workflow may begin at step 4-1 with a user pairing a graphical object from the Library to an external graphics object. Once this pairing is performed for one external representation, any other external representations identified as the same will inherit some or all of the tags and classifications from the library object at step 4-2. At step 4-3, the user may then receive a visual indication that an additional action is available, and may be presented with a pruned collection of external objects that match tag and classification criteria learned from the pairing, from which the user may choose another external representation. The external representation may be in a domain not already contained in the library object. In that case, if desired, the library object can be updated at step 4-4 by adding the external representation as a domain-specific representation.
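The following sketch restates steps 4-2 through 4-4 of the FIG. 4 workflow in illustrative Python, assuming objects shaped like the hypothetical LibraryObject above. The is_same predicate stands in for the image recognition comparison and, like the library.save call, is an assumption rather than a documented API.

```python
def pair_and_propagate(library_obj, paired_external, all_external, is_same):
    """Steps 4-2 and 4-3 (illustrative): propagate tags/classifications and
    return the pruned collection of matching external objects."""
    matched = []
    for ext in all_external:
        if ext is not paired_external and is_same(ext, paired_external):
            # Step 4-2: the external representation inherits tags and classifications.
            ext.tags.update(library_obj.tags)
            ext.classifications = list(library_obj.classifications)
            matched.append(ext)
    # Step 4-3: present this pruned collection to the user for further pairing.
    return matched

def add_domain_representation(library, library_obj, chosen_external):
    """Step 4-4 (illustrative): add a chosen external representation to the
    library object as an additional, domain-specific representation."""
    if chosen_external not in library_obj.representations:
        library_obj.representations.append(chosen_external)
        library.save(library_obj)   # hypothetical persistence call
```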
In some domains, the pairing of library objects to imported external representations is a common user workflow. However, embodiments of the invention can allow a user to perform their normal workflow with existing tools, and at any point switch to and from an accelerated workflow. Embodiments of the invention do not require pre-defined rules or user tags or classification input for the mapping. In addition, embodiments of the invention can update a library object to contain an additional discipline context, without any change in the user's prior workflow. The resulting object can be used directly in other disciplines if desired, thus encouraging multidisciplinary collaboration with little overhead.
While each discipline can have its own internal representation, the underlying object that is being matched is often the same real-world, purchasable item. For this reason, according to embodiments of the invention, the matching can sometimes be entirely automated. In the case of a 2-dimensional image as an external representation, a simplified projected image of a 3-dimensional representation in a library object may be matched to the 2-dimensional input without any explicit user matching. This automatic matching can reduce overhead for multi-disciplinary collaboration, since some objects can gain data from additional disciplines with little to no user interaction. A 3-dimensional object model can be paired with a 2-dimensional projection without using any manual or implied pairing information, such as might be acquired from a training session. A user can also enter additional details before matching for more exact results, while still spending less time than with current manual or assisted manual pairing methods.
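As one possible (and simplified) realization of this automatic 2D-to-3D pairing, the sketch below template-matches a pre-rendered top-view projection of a library object's 3-dimensional representation against the imported 2-dimensional layout. OpenCV is used only for illustration; the projection step and the fixed scale and orientation are assumptions, and the disclosed system is not limited to this technique.

```python
import cv2
import numpy as np

def find_projection_in_layout(layout_gray, projection_gray, threshold=0.8):
    """Return (x, y) pixel locations where the projected 3D model matches the 2D layout.
    Assumes both images are grayscale and at roughly the same scale and rotation."""
    result = cv2.matchTemplate(layout_gray, projection_gray, cv2.TM_CCOEFF_NORMED)
    ys, xs = np.where(result >= threshold)
    return list(zip(xs.tolist(), ys.tolist()))

# Hypothetical usage:
# layout = cv2.imread("carpet.png", cv2.IMREAD_GRAYSCALE)          # imported 2D layout
# proj   = cv2.imread("robot_top_view.png", cv2.IMREAD_GRAYSCALE)  # projection of the 3D model
# hits   = find_projection_in_layout(layout, proj)
```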
According to embodiments of the invention, a user may change additional settings to improve the options presented during assisted manual pairing and the results of automatic pairing. Some examples are as follows: (1) Thresholds for recognizing similar images; e.g., two objects in an external image with the same shape but different widths could be considered the same object. Similar settings can be made for rotation. (2) Tolerances for image recognition facets; e.g., rungs on a ladder or doors in an electrical cabinet may vary within the same classification. In some cases, the classifications may change based on user-editable ranges for the tolerances.
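A hypothetical settings object for these user-tunable parameters might look like the following; the field names and default values are illustrative assumptions only.

```python
from dataclasses import dataclass

@dataclass
class MatchingSettings:
    width_tolerance: float = 0.10        # objects whose widths differ by <= 10% may count as the same
    rotation_tolerance_deg: float = 5.0  # allowed rotation difference between candidate images
    facet_count_tolerance: int = 2       # e.g. ladder rungs or cabinet doors may vary by this many
    reclassify_on_exceed: bool = False   # whether exceeding a tolerance changes the classification
```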
An exemplary workflow according to an embodiment of the invention is illustrated in FIG. 5. A workflow according to an embodiment of the disclosure may begin at step 5-1 with a line design engineer importing a "carpet," a 2-dimensional sketch of a plant layout in a top-view format. Graphical objects or sub-images in the carpet may be delineated and cataloged by an image recognition system according to an embodiment of the invention, and any nearby text may be collected to be associated with these sub-images. At step 5-2, this engineer places 3-dimensional objects, possibly specific to the line design domain, on top of individual or compound sub-images. At step 5-3, the existing classification may be extracted from the placed object and applied to the covered image, which has already been delineated and cataloged by an image recognition system according to an embodiment of the invention. The covered image may be added as an additional representation for the library object. If there are more instances of the now-classified 2-dimensional image on the same carpet, a user may click on one at step 5-4a to filter the library, or bring up a dialog box to do the same. The user may be provided with an option to automatically place a new instance of the last placed part on a similarly cataloged part, to select from the library within the classification or within ancestor classifications, or to place a new item on the last placed part. At step 5-4b, when the new item is placed, the classification is appended to the existing classifications of the 2-dimensional image, if it differs from previous classifications. Steps 5-4a and 5-4b may be repeated, depending on user settings, for future carpets in the project, in that office, or in that company, thus skipping the manual 3-dimensional object search of step 5-2. Useful suggestions can be reached quickly when the user or an administrative user manually adds, edits, or deletes 2-dimensional object-to-classification mappings, or applies a mapping file to perform these operations en masse.
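To make step 5-1 concrete, the sketch below delineates sub-images in an imported carpet and collects nearby OCR text. OpenCV (version 4 or later) and pytesseract are stand-ins for the image recognition system of the disclosure; the thresholding value and the nearby-text distance are assumptions.

```python
import cv2
import pytesseract

def catalog_carpet(carpet_path, text_distance=50):
    """Step 5-1 (illustrative): delineate sub-images in a 2D plant-layout carpet
    and attach any OCR text found near each sub-image."""
    img = cv2.imread(carpet_path, cv2.IMREAD_GRAYSCALE)
    _, binary = cv2.threshold(img, 200, 255, cv2.THRESH_BINARY_INV)
    # OpenCV >= 4 returns (contours, hierarchy).
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = [cv2.boundingRect(c) for c in contours]          # (x, y, w, h) per sub-image

    ocr = pytesseract.image_to_data(img, output_type=pytesseract.Output.DICT)
    catalog = []
    for (x, y, w, h) in boxes:
        # Crude proximity heuristic: keep OCR words whose top-left corner lies
        # within text_distance of the sub-image bounding box.
        nearby = [ocr["text"][i] for i in range(len(ocr["text"]))
                  if ocr["text"][i].strip()
                  and abs(ocr["left"][i] - x) <= w + text_distance
                  and abs(ocr["top"][i] - y) <= h + text_distance]
        catalog.append({"bbox": (x, y, w, h), "labels": nearby, "classifications": []})
    return catalog
```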
Utilizing a matching framework according to an embodiment of the disclosure, engineers can save much of the effort of synchronizing data between disciplines through object pairing. An automatic process according to an embodiment of the disclosure may replace a manual process that can take significant engineering time to match object domain representations. Different engineering domains may finish their work earlier. For example, a plant engineering project can be finished earlier, which means that the plant can start production earlier. In addition, an automatic process is less error prone than an entirely manual pairing step. Depending on project and user preferences, methods according to embodiments of the disclosure can augment manual pairing and synchronization, or completely automate it once sufficient manual pairing has been performed or an existing pairing database has been loaded. It is possible to adapt a method according to embodiments of the disclosure to all existing and future multi-disciplinary engineering systems. It is to be understood that embodiments of the present disclosure can be implemented in various forms of hardware, software, firmware, special purpose processes, or a combination thereof. In one embodiment, the present invention can be implemented in software as an application program tangibly embodied on a computer readable program storage device. The application program can be uploaded to, and executed by, a machine comprising any suitable architecture.
FIG. 6 is a block diagram of an exemplary computer system for implementing a method for using image recognition in a multidisciplinary engineering system, according to an embodiment of the invention. Referring now to FIG. 6, a computer system 61 for implementing the present invention can comprise, inter alia, a central processing unit (CPU) 62, a memory 63 and an input/output (I/O) interface 64. The computer system 61 is generally coupled through the I/O interface 64 to a display 65 and various input devices 66 such as a mouse and a keyboard. The support circuits can include circuits such as cache, power supplies, clock circuits, and a communication bus. The memory 63 can include random access memory (RAM), read only memory (ROM), a disk drive, a tape drive, etc., or a combination thereof. The present invention can be implemented as a routine 67 that is stored in memory 63 and executed by the CPU 62 to process the signal from the signal source 68. As such, the computer system 61 is a general purpose computer system that becomes a specific purpose computer system when executing the routine 67 of the present invention.
The computer system 61 also includes an operating system and micro instruction code. The various processes and functions described herein can either be part of the micro instruction code or part of the application program (or combination thereof) which is executed via the operating system. In addition, various other peripheral devices can be connected to the computer platform such as an additional data storage device and a printing device.
It is to be further understood that, because some of the constituent system components and method steps depicted in the accompanying figures can be implemented in software, the actual connections between the system components (or the process steps) may differ depending upon the manner in which the present invention is programmed. Given the teachings of the present invention provided herein, one of ordinary skill in the related art will be able to contemplate these and similar implementations or configurations of the present invention.
While the present invention has been described in detail with reference to exemplary embodiments, those skilled in the art will appreciate that various modifications and substitutions can be made thereto without departing from the spirit and scope of the invention as set forth in the appended claims.

Claims

What is claimed is:
1. A computer-implemented method of using image recognition for cross domain data exchange, the method executed by the computer comprising the steps of: receiving a set of paired graphical objects, wherein a first graphical object is read from an online library, and a second graphical object is an external graphics object;
transferring tags and classification associated with the first graphical object onto one or more other external representations of said first graphical object; and presenting a user with a collection of external graphical objects that match the tag and classification criteria transferred from the first graphical object.
2. The method of claim 1, further comprising updating said online library with one or more of said matched external graphical objects, if one or more of said matched external graphical objects are not already contained in said library.
3. The method of claim 1, further comprising updating said online library with an encapsulation of one or more of said matched external graphical objects and the first graphical object within one of a new graphical object in the library, or the original first graphical object as one or more additional representations.
4. The method of claim 1, wherein the set of paired graphical objects were paired by a user.
5. The method of claim 1, wherein the set of paired graphical objects were automatically paired by an image recognition program.
6. The method of claim 5, wherein said image recognition program does not use pre-defined rules or user tags to pair said set of paired graphical objects.
7. The method of claim 5, wherein pairing said set of paired graphical objects comprises matching a projected image of a 3-dimensional representation of the first graphical object with a 2-dimensional representation of said second graphical object.
8. The method of claim 1, further comprising receiving thresholds for recognizing similar objects, and tolerances for image recognition facets.
9. A computer-implemented method of using image recognition for cross domain data exchange, the method executed by the computer comprising the steps of: importing a 2-dimensional (2D) layout image;
recognizing and cataloging objects in said 2D layout image, and collecting text associated with said objects in said layout image;
receiving representations of one or more 3-dimensional (3D) objects placed on top of one or more recognized objects in said 2D layout image, wherein said 3D objects were extracted from an online library;
extracting classifications associated with said placed 3D objects and applying said classifications to objects in said 2D image that are covered by the 3D objects; and adding the covered object with the applied classifications to the online library.
10. A non-transitory program storage device readable by a computer, tangibly embodying a program of instructions executed by the computer to perform the method steps for using image recognition for cross domain data exchange, the method comprising the steps of:
receiving a set of paired graphical objects, wherein a first graphical object is read from an online library, and a second graphical object is an external graphics object;
transferring tags and classification associated with the first graphical object onto one or more other external representations of said first graphical object; and presenting a user with a collection of external graphical objects that match the tag and classification criteria transferred from the first graphical object.
11. The computer readable program storage device of claim 10, the method further comprising updating said online library with one or more of said matched external graphical objects, if one or more of said matched external graphical objects are not already contained in said library.
12. The computer readable program storage device of claim 10, the method further comprising updating said online library with an encapsulation of one or more of said matched external graphical objects and the first graphical object within one of a new graphical object in the library, or the original first graphical object as one or more additional representations.
13. The computer readable program storage device of claim 10, wherein the set of paired graphical objects were paired by a user.
14. The computer readable program storage device of claim 10, wherein the set of paired graphical objects were automatically paired by an image recognition program.
15. The computer readable program storage device of claim 14, wherein said image recognition program does not use pre-defined rules or user tags to pair said set of paired graphical objects.
16. The computer readable program storage device of claim 14, wherein pairing said set of paired graphical objects comprises matching a projected image of a 3-dimensional representation of the first graphical object with a 2-dimensional representation of said second graphical object.
17. The computer readable program storage device of claim 10, the method further comprising receiving thresholds for recognizing similar objects, and tolerances for image recognition facets.
PCT/US2014/016799 2013-02-19 2014-02-18 Multi disciplinary engineering design using image recognition WO2014130417A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201361766160P 2013-02-19 2013-02-19
US61/766,160 2013-02-19

Publications (1)

Publication Number Publication Date
WO2014130417A1 (en)

Family

ID=50277304

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2014/016799 WO2014130417A1 (en) 2013-02-19 2014-02-18 Multi disciplinary engineering design using image recognition

Country Status (1)

Country Link
WO (1) WO2014130417A1 (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2005114496A1 (en) * 2004-04-21 2005-12-01 Arcway Ag Method and device for designing technical devices and systems
US20060114252A1 (en) * 2004-11-29 2006-06-01 Karthik Ramani Methods for retrieving shapes and drawings
US20110022613A1 (en) * 2008-01-31 2011-01-27 Siemens Ag Method and System for Qualifying CAD Objects
US20110055150A1 (en) * 2009-08-31 2011-03-03 Boehm Birthe Method for computer assisted planning of a technical system
WO2011023239A1 (en) * 2009-08-31 2011-03-03 Siemens Aktiengesellschaft Workflow centered mechatronic objects

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
A-P TA ET AL: "3D Object Detection and Viewpoint Selection in Sketch Images Using Local Patch-Based Zernike Moments", CONTENT-BASED MULTIMEDIA INDEXING, 2009. CBMI '09. SEVENTH INTERNATIONAL WORKSHOP ON, IEEE, PISCATAWAY, NJ, USA, 3 June 2009 (2009-06-03), pages 189 - 194, XP031481685, ISBN: 978-1-4244-4265-2 *
VIRGILIO QUINTANA ET AL: "Will Model-based Definition replace engineering drawings throughout the product lifecycle? A global perspective from aerospace industry", COMPUTERS IN INDUSTRY, vol. 61, no. 5, June 2010 (2010-06-01), pages 497 - 508, XP055111599, ISSN: 0166-3615, DOI: 10.1016/j.compind.2010.01.005 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112486710A (en) * 2020-12-17 2021-03-12 夏红梅 Information acquisition method based on big data and artificial intelligence and digital content service platform
CN113535951A (en) * 2021-06-21 2021-10-22 深圳大学 Method, device, terminal equipment and storage medium for information classification
CN113535951B (en) * 2021-06-21 2023-02-17 深圳大学 Method, device, terminal equipment and storage medium for information classification

Similar Documents

Publication Publication Date Title
CN102687150B (en) Composite information display for a part
US10950021B2 (en) AI-driven design platform
Jung et al. A reference activity model for smart factory design and improvement
US10289263B2 (en) Data acquisition and encoding process linking physical objects with virtual data for manufacturing, inspection, maintenance and repair
Cheng et al. A functional feature modeling method
US10984317B2 (en) Dataset for learning a function taking images as inputs
EP2892028A2 (en) Updating of 3D CAD models based on augmented-reality
EP3451206B1 (en) Method, apparatus, and device for generating a visual model layout of a space
Cao et al. Digital Twin–oriented real-time cutting simulation for intelligent computer numerical control machining
US10534865B2 (en) Flexible CAD format
CA2667334C (en) Method and devices for aiding in the modeling of 3d objects
CN102812463A (en) Method And System Enabling 3D Printing Of Three-dimensional Object Models
US10417924B2 (en) Visual work instructions for assembling product
Xiao et al. Mobile 3D assembly process information construction and transfer to the assembly station of complex products
Jayasena et al. Building information modelling for Sri Lankan construction industry
WO2014026021A1 (en) Systems and methods for image-based searching
US11373022B2 (en) Designing a structural product
CN103136791A (en) Data association method and data association device used for airplane digitalization maintenance and application
Ali et al. Heritage Building Preservation Through Building Information Modelling: Reviving Cultural Values Through Level of Development Exploration
CN109658499B (en) Model establishing method and device and storage medium
WO2014130417A1 (en) Multi disciplinary engineering design using image recognition
Sommer et al. Automated generation of a digital twin of a manufacturing system by using scan and convolutional neural networks
Rogage et al. 3D object recognition using deep learning for automatically generating semantic BIM data
Pippenger Three-dimensional model for manufacturing and inspection
EP4046004A1 (en) Generating a 3d model of a plant layout

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14710089

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14710089

Country of ref document: EP

Kind code of ref document: A1