CN107924268B - Object selection system and method - Google Patents

Object selection system and method

Info

Publication number
CN107924268B
Authority
CN
China
Prior art keywords
objects
appearance
change
path
processor
Prior art date
Legal status
Active
Application number
CN201680050238.2A
Other languages
Chinese (zh)
Other versions
CN107924268A
Inventor
迪克·巴德塞
迈克尔·内尔松
詹姆斯·卡林顿
提摩西·A·克尔克
Current Assignee
SIEMENS INDUSTRY SOFTWARE Ltd
Original Assignee
SIEMENS INDUSTRY SOFTWARE Ltd
Priority date
Filing date
Publication date
Application filed by SIEMENS INDUSTRY SOFTWARE Ltd
Publication of CN107924268A
Application granted
Publication of CN107924268B
Status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481: Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/04815: Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • G06F 3/0482: Interaction with lists of selectable items, e.g. menus
    • G06F 3/0484: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04842: Selection of displayed objects or displayed text elements
    • G06F 3/04845: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G06F 3/0487: Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488: Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F 3/04883: Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser, for inputting data by handwriting, e.g. gesture or text

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Systems (100) and methods are provided that facilitate selection of individually selectable three-dimensional objects displayed via a display device (108). The system may include at least one processor (102) configured to determine at least one path (128) through the objects (112, 114, 116) based on at least one motion input (130) received through the input device (110). Based on the amount of surface area of each object traversed by the at least one path, the processor may also cause at least two of the objects (112, 114) to be selected as a group while at least one of the objects (116) remains unselected. Additionally, in response to at least one operation input (132) received through the input device, the processor may cause at least one operation to be performed on the selected group of at least two objects and cause the at least one operation not to be performed on the at least one object that remains unselected, based on whether each object is selected.

Description

Object selection system and method
Technical Field
The present disclosure relates generally to computer-aided design (CAD), visualization and manufacturing systems, Product Data Management (PDM) systems, Product Lifecycle Management (PLM) systems, and similar systems for creating and managing data for products and other projects (collectively referred to herein as product systems).
Background
PLM systems may include components that facilitate the design of product structures. Such components may benefit from improvements.
Disclosure of Invention
Various disclosed embodiments include systems and methods that may be used to facilitate selection of an object. In one example, a system may include at least one processor. The processor may be configured to cause the display device to display a plurality of individually selectable three-dimensional objects. Further, the at least one processor may be configured to determine at least one path through the object based on at least one motion input received through operation of the at least one input device. Additionally, the at least one processor may be configured to cause at least two of the objects to be selected in the group while at least one of the objects remains unselected based on an amount of surface area of each object traversed by the at least one path. Further, the at least one processor may be configured to: in response to at least one operation input received through the at least one input device, causing at least one operation of the plurality of operations to be performed on the selected group of the at least two objects and causing at least one operation of the plurality of operations not to be performed on the at least one object that remains unselected, based on whether the object is selected.
In another example, a method may include various acts performed by operation of at least one processor. Such a method may include: causing a display device to display a plurality of individually selectable three-dimensional objects; determining at least one path through the object based on at least one motion input received through operation of at least one input device; causing at least two of the objects to be selected in a group while at least one of the objects remains unselected based on an amount of surface area of each object traversed by the at least one path; and in response to at least one operation input received through the at least one input device, causing at least one of the plurality of operations to be performed on the selected group of at least two objects and at least one of the plurality of operations to be not performed on at least one object that remains unselected, based on whether the object is selected.
Yet another example may include a non-transitory computer-readable medium encoded with executable instructions (e.g., software components on a storage device) that, when executed, cause at least one processor to perform the method described.
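As a rough, non-authoritative illustration of the summarized method, the following Python sketch organizes the selection logic in one possible way; the class and function names (SelectableObject, apply_path_segment, and so on) are hypothetical and are not taken from the patent or from any Siemens API.

    from dataclasses import dataclass

    @dataclass
    class SelectableObject:
        name: str
        front_facing_area: float      # total front-facing surface area
        painted_area: float = 0.0     # surface area traversed by the path so far
        selected: bool = False

    def apply_path_segment(obj, traversed_area):
        """Accumulate the surface area of this object covered by the drawn path."""
        obj.painted_area = min(obj.front_facing_area, obj.painted_area + traversed_area)

    def update_selection(objects, threshold=0.5):
        """Mark an object selected once the covered fraction reaches the threshold."""
        for obj in objects:
            obj.selected = (obj.painted_area / obj.front_facing_area) >= threshold

    def perform_on_selected(objects, operation):
        """Apply one operation to every object in the selected group, skip the rest."""
        for obj in objects:
            if obj.selected:
                operation(obj)

    # Two small objects mostly covered by a path and one large object only
    # grazed by it, loosely mirroring objects 112, 114 and 116 in FIG. 1.
    objs = [SelectableObject("112", 10.0), SelectableObject("114", 10.0),
            SelectableObject("116", 100.0)]
    apply_path_segment(objs[0], 8.0)
    apply_path_segment(objs[1], 7.5)
    apply_path_segment(objs[2], 10.0)          # well under half of the large object
    update_selection(objs)
    perform_on_selected(objs, lambda o: print("hide", o.name))   # prints 112 and 114 only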
The foregoing has outlined rather broadly the technical features of the present disclosure so that those skilled in the art may better understand the following embodiments. Additional features and advantages of the disclosure will be described hereinafter which form the subject of the claims. Those skilled in the art will appreciate that they may readily use the conception and the specific embodiment disclosed as a basis for modifying or designing other structures for carrying out the same purposes of the present disclosure. Those skilled in the art will also realize that such equivalent constructions do not depart from the spirit and scope of the disclosure in its broadest form.
Before proceeding with the following embodiments, it may be advantageous to set forth definitions of certain words and phrases that may be used throughout this patent document. For example, the terms "include" and "comprise," as well as derivatives thereof, mean inclusion without limitation. The singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. Further, as used herein, the term "and/or" refers to and encompasses any and all possible combinations of one or more of the associated listed items. The term "or" is inclusive, meaning and/or, unless the context clearly dictates otherwise. The phrases "associated with" and "associated therewith," as well as derivatives thereof, may mean including, included within, interconnected with, containing, contained within, connected to or with, or coupled to or with.
Furthermore, although the terms "first," "second," "third," etc. may be used herein to describe various elements, functions or acts, these elements, functions or acts should not be limited by these terms. Rather, these numerical adjectives are used to distinguish between different elements, functions or acts. For example, a first element, function, or action may be termed a second element, function, or action, and, similarly, a second element, function, or action may be termed a first element, function, or action, without departing from the scope of the present disclosure.
Additionally, phrases such as "a processor is configured to" implement one or more functions or processes may mean that the processor is operatively configured to implement the functions or processes via software, firmware, and/or wired circuitry. For example, a processor configured to implement a function/process may correspond to a processor actively executing software/firmware programmed to cause the processor to implement the function/process, and/or may correspond to a processor causing software/firmware in an available memory or storage device to be executed by the processor to implement the function/process. It should also be noted that a processor "configured to" implement one or more functions or processes may also correspond to a processor circuit (e.g., an ASIC or FPGA design) that is specifically manufactured or "wired" to implement the functions or processes. Further, the phrase "at least one" preceding an element (e.g., a processor) configured to implement more than one function may correspond to one or more elements (e.g., processors) that each implement the functions, and may also correspond to two or more elements (e.g., processors) that respectively implement different ones of the one or more different functions.
The term "and.. adjacent" can mean: an element is relatively close to but not in contact with another element; or the element is in contact with other parts.
Definitions for certain words and phrases are provided throughout this patent document, and those of ordinary skill in the art will understand that such definitions apply to many, if not most, instances of prior as well as future uses of such defined words and phrases. Although some terms may include a wide variety of embodiments, the appended claims may expressly limit these terms to particular embodiments.
Drawings
FIG. 1 illustrates a functional block diagram of an example system that facilitates selection of an object.
FIGS. 2-9 illustrate example visual outputs from a graphical user interface including selectable and selected three-dimensional objects of a structure.
FIG. 10 illustrates a flow diagram of an example method that facilitates selection of an object.
FIG. 11 is a block diagram of a data processing system in which embodiments may be implemented.
Detailed Description
Various technologies pertaining to systems and methods for selecting objects will now be described with reference to the drawings, where like reference numerals represent like elements throughout. The drawings discussed below and the various embodiments used to describe the principles of the present disclosure in this patent document are by way of illustration only and should not be construed in any way to limit the scope of the disclosure. Those skilled in the art will understand that the principles of the present disclosure may be implemented in any suitably arranged device. It should be understood that functions described as being performed by certain system elements may be performed by multiple elements. Similarly, a single element may be configured to perform functions described as being performed by multiple elements. The numerous innovative teachings of the present application will be described with reference to exemplary non-limiting embodiments.
Many forms of drawing systems (e.g., CAD software) may operate to manipulate various types of three-dimensional objects including one or more structures. Such objects may include manufactured objects, such as components, assemblies, and sub-assemblies used to construct structures. By way of example, a wagon structure may comprise a base on which several components are mounted. Such components may include a handle and four wheels mounted via brackets, shafts, bearings, and fasteners (e.g., bolts, washers, and nuts). All of these components correspond to 3-D components that can be drawn and manipulated by CAD software or other drawing systems. Further, it should be appreciated that the drawing system may also be capable of drawing and manipulating more general 3-D objects, such as geometric 3-D shapes including prisms, spheres, cones, cylinders, cubes, and/or cuboids.
Thus, in general, a 3-D object may correspond to any type of 3-D object that is capable of being displayed by a display device (e.g., a display screen) and that can be manipulated, via input through an input device, with respect to the shape, size, orientation, position, visibility, transparency, color, physical properties, annotations, and/or any other characteristic of the object.
Referring to FIG. 1, an example system 100 that facilitates drawing and manipulating objects is illustrated. The system 100 may include at least one processor 102 configured to execute one or more application software components 104 from a memory 106 to implement various features described herein. The application software component 104 may include a drawing software application or a portion thereof, such as a CAD software application. Such CAD software applications can operate to generate and edit CAD drawings based at least in part on input provided by a user.
Examples of CAD/CAM/CAE (computer-aided design/computer-aided manufacturing/computer-aided engineering) software suitable for including at least some of the functions described herein include the NX suite of applications available from Siemens Product Lifecycle Management Software Inc. (Plano, Texas). However, it should also be understood that such drawing software applications may correspond to other types of drawing software, including architectural software and/or any other type of software that involves the drawing and manipulation of 3-D objects of a structure.
The described system may include at least one display device 108 (e.g., a display screen) and at least one input device 110. For example, the processor may be included as part of a PC, laptop, workstation, server, tablet, mobile phone, or any other type of computing system. The display device may include, for example, an LCD display, a monitor, and/or a projector. The input devices may include, for example, a mouse, a pointer, a touch screen, a touch pad, a drawing pad, a trackball, a button, a keypad, a keyboard, a game controller, a camera, a motion sensing device that captures motion gestures, or any other type of input device capable of providing the inputs described herein. Further, for devices such as tablet computers, the processor 102 may be integrated into a housing that includes a touch screen as both an input device and a display device. Further, it should be appreciated that some input devices (e.g., game controllers) may include a variety of different types of input devices (analog joysticks, d-pads, and buttons).
Further, it should be noted that the processor described herein may be located in a server remote from the display device and input device described herein. In such examples, the described display device and input device may be included in a client device that communicates with a server (and/or a virtual machine executing on the server) over a wired or wireless network (which may include the internet). In some implementations, such a client device may execute, for example, a remote desktop application or may correspond to a portal device that executes a remote desktop protocol with a server to send input from an input device to the server and receive visual information from the server for display by a display device. Examples of such remote desktop protocols include Teradici's PCoIP, Microsoft's RDP, and the RFB protocol. In such examples, the processor described herein may correspond to a virtual processor of a virtual machine executing in a physical processor of a server.
FIG. 1 schematically shows a plurality of different views (A to C) of the display device 108, illustrating what the processor 102 causes the display device 108 to display in response to various inputs received through the input device 110. For example, in view A of the display device 108, the processor 102 may be configured (e.g., via the application software component) to cause the display device 108 to display the structure 118 in the workspace 120.
The workspace 120 may correspond to a three-dimensional space in which objects are visually drawn, displayed, and manipulated using a Graphical User Interface (GUI) 124 of the application software component 104 to create at least one desired structure 118. In an example implementation, the display device 108 may correspond to a two-dimensional (2D) display screen through which different views of the three-dimensional (3D) workspace and structure 118 may be viewed. Additionally, in other examples, a 3D display may be used to display a 3D structure in a 3D workspace.
The example shown in FIG. 1 depicts a generic structure 118, which comprises two smaller objects 112, 114 connected to (and/or adjacent to) a larger object 116. Such objects are depicted as block-type components. It should be appreciated, however, that these objects are intended generally to represent any type of component of a structure and/or of separate structures to which the described features may apply.
Objects may be drawn and/or edited in workspace 120 in response to drawing input 122 received through input device 110. Data representing the rendered object may be stored in memory 106 as object data 126. However, it should also be appreciated that such object data 126 may be retrieved from the CAD file and/or the data store via input through the input device 110, and the processor may be configured to display the object through the display device in response to the loaded object data.
The GUI 124 may include functionality that enables a user to individually select one or more of the objects 112, 114, 116 of the structure 118 via input through the input device 110. For example, the GUI may enable a user to select a single object with a mouse click and to select additional objects by clicking the mouse while holding down a control key of the keyboard. Further, the GUI may enable selection of one or more objects by using a rectangular selection box, wherein the mouse is used to draw a box over objects and the objects under the box become selected. However, it should be appreciated that for a large number of small objects, selecting objects by individually clicking on them may be slow and tedious. Furthermore, a rectangular selection box (although perhaps faster) may select objects that are not desired to be selected.
To provide greater granularity to the selection process in a faster manner than a single click of each desired object to be selected, example embodiments may enable a user to quickly draw one or more paths 128 on the object to be selected in response to one or more motion inputs 130 through the input device 110. To visualize the path for the user, the described processor may be operative to change an appearance of a portion of each particular object traversed by at least one path determined by the processor based on at least one motion input.
For an input device such as a mouse, such motion input may be generated by moving the mouse pointer across the display screen, while a mouse button is pressed, along the areas where the desired path is located. Similarly, for a touch screen type input device, a user may slide a finger along the surface of the touch screen over the areas where the desired path is located. However, the input device used to provide the motion input is not limited to a mouse or a touch screen, and may correspond to any type of input device capable of providing data that represents a path through one or more objects desired to be selected.
View B of fig. 1 shows an example of such a path 128 through objects 112, 114, and 116. In this example, the path has the form of a stripe across the structure 118, which covers the surface of the structure. Here, the stripes are shown as having a black color that overlays/replaces the previously shown white surface of the object shown in view a of fig. 1. It should be appreciated that example embodiments may produce the change in appearance by one or more different color changes, texture changes, brightness changes, line patterns, or any other visual change in the surface appearance of an object that can be perceived by an end user of the system.
However, in the described embodiment, rather than selecting each object traversed by the path (i.e. intersected and/or overlaid), the processor may be configured to select only those objects traversed by the path for which a predetermined portion (fraction) of their respective surface area is traversed. In other words, based on the amount of surface area of each object traversed by the at least one path, the processor may cause one or more objects having a change in appearance to be selected while one or more objects having a change in appearance may remain unselected.
In this example, a majority of the visible surface of the smaller objects 112, 114 is covered by the path 128, while only a small portion (e.g., less than 15%) of the visible surface of the larger object 116 is covered by the path 128. Thus, in this example, the processor may be configured to select only the smaller objects 112, 114 and not the larger object 116 based on the amount of surface area of each part traversed by the path (i.e., covered).
In an example embodiment, the path may be determined from the motion input (e.g., changes in the coordinates of the pointer 134) by determining the position of the mouse pointer 134 relative to the positions of the objects and, based on these relative positions, determining how the appearance of the surfaces of the objects should change. In other words, the determined path may correspond to the locations on the objects at which the objects should undergo a change in appearance based on the motion input. Then, based on the path (i.e., the determined locations of the objects to be changed), the processor may cause the appearance of the objects at those locations to change accordingly (e.g., a color change).
It should be appreciated that the width of the path may be a user configurable parameter. Further, in some examples, the size of the pointer 134 may vary to correspond to, and visually represent, the particular path width selected by the user for drawing the path. It should also be appreciated that, in an example embodiment, the surface region undergoing the change in appearance may be matched to the location of the path determined by the processor based on the motion input. In other words, throughout the location area where the processor determines the path, the processor may cause the portion of the object at the location area of the determined path to have a change in appearance.
However, in an example embodiment, it should be appreciated that the actual change in appearance of an object visible to the user through the display device may only approximate the location area of the determined path. For example, the surface area of the color change may be larger or smaller than the determined path by varying amounts, while still enabling the user to perceive the approximate location and size of the path relative to each object. How closely the perceived change in appearance of an object matches the path may vary, for example, based on the resolution of the display device relative to the size of the object being displayed. In this example, small objects such as objects 112, 114 may be displayed by the display device with fewer pixels than the larger object 116, and thus may not approximate the position of the path as accurately as larger objects.
Further, it should be appreciated that the path (and/or the appearance change corresponding to the path) may not be solid. For example, as shown in view B of fig. 1, the path and/or the change in appearance corresponding to the path may be blurred, sparse, or speckled. In other examples, the path and/or the change in appearance corresponding to the path may be comprised of dots, holes, shading, stripes, dashed lines, broken lines, or other patterns. Thus, it may be determined that paths with no holes or relatively few holes traverse more surface area of the object than paths with holes or relatively many holes. However, in other embodiments, the presence of such holes and/or sparse/fuzzy regions may correspond only to a visual representation of the location of the path, and may not reduce the amount of surface area determined to be traversed by the path.
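One way to approximate the mapping from pointer motion to painted surface regions described above is to sample the pointer positions and mark small visible-surface patches that fall within the configurable path width. The following Python sketch assumes hypothetical names and a simple screen-space distance test; it is illustrative only and not the patent's algorithm.

    import math

    def mark_painted_cells(pointer_samples, surface_cells, path_width):
        """
        pointer_samples: (x, y) screen positions sampled from the motion input.
        surface_cells:   mapping of (object_id, cell_id) -> (x, y) screen position
                         of one small patch of that object's visible surface.
        path_width:      user-configurable width of the drawn path, in pixels.
        Returns the set of (object_id, cell_id) patches whose appearance changes.
        """
        half_width = path_width / 2.0
        painted = set()
        for key, (cx, cy) in surface_cells.items():
            for px, py in pointer_samples:
                if math.hypot(cx - px, cy - py) <= half_width:
                    painted.add(key)
                    break
        return painted

    # A 3-sample stroke with a 20-pixel-wide path over two nearby patches.
    stroke = [(100, 100), (110, 100), (120, 100)]
    cells = {("112", 0): (105, 103), ("116", 7): (160, 140)}
    print(mark_painted_cells(stroke, cells, path_width=20))   # only the patch on object 112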
It should be appreciated that when a 3D object is displayed on a 2D display screen, even though a portion of the object may be partially occluded (i.e., partially covered/blocked by other objects), the object still has a front-facing surface area that would be fully visible when displayed by the display device if it were not occluded by one or more other objects. Thus, for view A of FIG. 1, the front-facing surface area of the larger object 116 includes the areas under the smaller objects 112, 114, even though these areas are currently occluded by the smaller objects.
In an example embodiment, the processor may be configured to determine the amount of front-facing surface area of each object. The processor may be further configured to: based on the determined portion (e.g., percentage) of the amount of the front-facing surface area of each object traversed by the at least one path, cause some of the objects having a change in appearance to be selected while at least some of the objects having a change in appearance remain unselected. Thus, in view B of FIG. 1, the portion of the surface area of the larger object 116 traversed by the path 128 is evaluated relative to the entire front-facing surface of the object 116 (including the visible surface and the surface hidden under the objects 112, 114).
Further, the processor may be operative to determine a threshold amount corresponding to a fractional amount (e.g., a threshold percentage) of the front-facing surface area. Such a threshold amount may correspond to a predetermined amount that is configurable by a user through the GUI. The processor may then determine that an object is selected when the visible portion of its determined front-facing surface area traversed by the at least one path is equal to or greater than the threshold amount. Referring to view B of FIG. 1, for the larger object 116 this visible portion corresponds to the amount of visible surface area of the object 116 whose appearance is directly changed (e.g., the portion of the path 128 that does not cover the smaller objects 112, 114), divided by the front-facing surface area of the object 116 that would be visible if it were not covered by the smaller objects 112, 114.
In some implementations, the user can set the threshold such that 50% or more of the front-facing surface area of an object needs to be directly traversed by at least one path before the processor causes the object to be selected. Other users may prefer to set the threshold below 50% (e.g., 33%) so that relatively less of an object's front-facing surface area needs to be covered in order for the object to be selected.
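A minimal sketch of the surface-area bookkeeping implied here, assuming triangle meshes and a fixed view direction: the front-facing area counts occluded front-facing triangles but not back-facing ones, and the selection test compares the directly painted visible area against that total. The function names and numbers are illustrative only.

    import numpy as np

    def front_facing_area(triangles, view_dir=(0.0, 0.0, -1.0)):
        """
        Sum the area of mesh triangles whose normals face the viewer.  This counts
        front-facing surface hidden behind other objects, but not the object's own
        back-facing surface.  Assumes counter-clockwise vertex winding as seen from
        outside the object, with the viewer looking along -z.
        triangles: array of shape (n, 3, 3) holding vertex positions.
        """
        triangles = np.asarray(triangles, dtype=float)
        view_dir = np.asarray(view_dir, dtype=float)
        edge1 = triangles[:, 1] - triangles[:, 0]
        edge2 = triangles[:, 2] - triangles[:, 0]
        normals = np.cross(edge1, edge2)          # length equals twice the triangle area
        facing = normals @ view_dir < 0.0         # normal points back toward the viewer
        return 0.5 * np.linalg.norm(normals[facing], axis=1).sum()

    def meets_selection_threshold(visible_painted_area, total_front_area, threshold=0.5):
        """Compare the directly painted visible area against the whole front-facing area."""
        return (visible_painted_area / total_front_area) >= threshold

    # A unit square facing the viewer (two triangles in the z = 0 plane).
    square = [[(0, 0, 0), (1, 0, 0), (1, 1, 0)],
              [(0, 0, 0), (1, 1, 0), (0, 1, 0)]]
    area = front_facing_area(square)                   # 1.0
    print(meets_selection_threshold(0.2, area))        # False with the default 50% threshold
    print(meets_selection_threshold(0.2, area, 0.15))  # True with a lower threshold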
To provide visual feedback on how the characteristics of the motion input affect the change in appearance of the objects, example embodiments of the processor may be configured to visually display the change in appearance as a painting process in which paint is deposited on the visible, front-facing surfaces of the objects in response to the motion input. For example, the visual form of the painting process may approximate a spray process (e.g., a spray gun) in which a virtual paint beam is sprayed onto the object based on the motion input. However, in other embodiments, the visual form of the painting process may approximate a paint brush, paint roller, or marker in which paint is distributed over the object based on the motion input.
The path 128 depicted in view B of FIG. 1 depicts an example of the change in appearance of a spray coating process, with the previously described speckled form along the edges of the path 128. In other words, the appearance change may not be a solid uniform color change, but may include individual small spots of color change surrounded by areas of unchanged color. Thus, the path may comprise holes or sparse/lighter regions that leave portions of the underlying surface visible between the spots of paint.
In an embodiment that models a spray process, the user may be enabled to control how much of the surface area of an object is covered or left uncovered at different portions of the path. For example, for relatively slow motion inputs (i.e., a slower speed of motion along the path), the amount of surface area with an appearance change is larger (e.g., fewer holes in the painted surface of the object and/or a wider distribution of paint). Thus, rather than a highly speckled path, the virtual paint for the path may be denser/solid and/or wider, because the slower speed of movement results in more paint speckles per unit area. For relatively faster motion inputs, a smaller amount of surface area may change in appearance. Thus, rather than a denser/solid coat of paint, the paint for the path may be highly speckled and/or narrower, because the faster movement results in fewer paint speckles per unit area.
In addition, the rate at which the appearance of a surface region changes for a given speed of motion input may be controlled via one or more attributes configurable through the GUI. For example, a sensitivity attribute may be configured to change the rate at which a given area changes appearance (surface area per unit time). Thus, a high sensitivity (high rate) may make the path denser/solid in a given amount of time, while a lower sensitivity (lower rate) may make the path more speckled and/or less wide in the same amount of time. Further, for example, a distribution size attribute may be configured to set the area/size of the paint application (i.e., the size of the virtual paint brush or the diameter of the spray paint output). Thus, a large distribution size may produce relatively wide stripes in response to motion input, while a relatively smaller distribution size may produce relatively narrow stripes.
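The speed-dependent density and the configurable sensitivity and distribution-size attributes could be modeled along the following lines; this is only a sketch with assumed names and units, not the patent's actual formula.

    def spray_coverage(speed, sensitivity=1.0, spray_radius=5.0):
        """
        Approximate how solidly virtual paint is deposited along the path.
        speed:        pointer speed along the path (screen units per second).
        sensitivity:  configurable rate attribute (painted area per unit time).
        spray_radius: configurable distribution-size attribute (footprint radius).
        Returns (density, stripe_width): density near 1.0 means a solid stripe,
        lower densities mean a sparser, more speckled stripe.
        """
        dwell_per_unit_length = 1.0 / max(speed, 1e-6)     # slower motion -> longer dwell
        density = min(1.0, sensitivity * dwell_per_unit_length)
        stripe_width = 2.0 * spray_radius
        return density, stripe_width

    print(spray_coverage(speed=0.5))    # slow stroke: (1.0, 10.0) -> dense, solid paint
    print(spray_coverage(speed=10.0))   # fast stroke: (0.1, 10.0) -> sparse, speckled paint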
In an example embodiment that mimics a spray coating process, the path may be determined by the processor by calculating how the 3D surface areas of the objects will change based on the calculated direction and position of a paint beam emitted from a virtual paint nozzle that is positioned and moved in response to the motion input (e.g., the position of a mouse pointer). Visual representations or animations of such a virtual paint nozzle and/or paint beam may or may not be displayed by the display device. For example, the processor may be configured to display only a movable pointer 134, such as the generic circle shown in view B, or another shape (e.g., arc, point, nozzle icon) representing the current location (and possibly diameter/width) of the virtual paint nozzle. When a command to start painting is received (e.g., via a mouse click), the processor may be configured to immediately determine the path where the paint beam will impact the surfaces of the objects in the 3D workspace, and then change the appearance of the surfaces corresponding to the determined path in real time to display the corresponding painted surfaces on the objects.
In the previously described example of emulating virtual spray paint, to determine the position of the path, the virtual paint may extend in 3D in a form that follows the 3D contours of the objects covered by the virtual paint. Then, in determining whether to select an object, the visible portion of the forward-facing 3D surface area of the object coated by the paint path is evaluated.
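One plausible way to realize the virtual paint beam described in the previous two paragraphs is to cast a cone of jittered rays from the nozzle position into the scene and deposit a paint speck at every hit. The sketch below is an assumption-laden illustration that uses spheres as stand-ins for CAD objects to stay self-contained; a real implementation would ray-cast against the objects' meshes.

    import numpy as np

    def cast_ray(origin, direction, spheres):
        """Return (object_id, hit_point) for the closest sphere hit by one ray, or None."""
        direction = np.asarray(direction, dtype=float)
        direction = direction / np.linalg.norm(direction)
        origin = np.asarray(origin, dtype=float)
        best = None
        for obj_id, (center, radius) in spheres.items():
            oc = origin - np.asarray(center, dtype=float)
            b = 2.0 * np.dot(direction, oc)
            c = np.dot(oc, oc) - radius * radius
            disc = b * b - 4.0 * c
            if disc < 0.0:
                continue
            t = (-b - np.sqrt(disc)) / 2.0          # nearest intersection distance
            if t <= 0.0:
                continue
            if best is None or t < best[0]:
                best = (t, obj_id, origin + t * direction)
        return None if best is None else (best[1], best[2])

    def spray_burst(nozzle_pos, nozzle_dir, spheres, rays=100, spread=0.05, seed=0):
        """Fire a cone of jittered rays; every hit deposits one speck of virtual paint."""
        rng = np.random.default_rng(seed)
        hits = []
        for _ in range(rays):
            jitter = rng.normal(scale=spread, size=3)
            hit = cast_ray(nozzle_pos, np.asarray(nozzle_dir, dtype=float) + jitter, spheres)
            if hit is not None:
                hits.append(hit)
        return hits

    # Two sphere stand-ins for CAD objects, sprayed from in front of the first one.
    scene = {"obj_112": ((0.0, 0.0, 5.0), 1.0), "obj_116": ((4.0, 0.0, 8.0), 2.0)}
    specks = spray_burst((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), scene)
    print(len(specks), specks[0][0])   # most specks land on obj_112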
However, it should be appreciated that in other embodiments, the path may be determined by other methods and may have a different appearance. For example, rather than having the processor emulate how a sprayed beam of paint will change the appearance of the surfaces of objects in the 3D workspace, the processor may emulate a paint brush or marker pen that is used to create a 2D path across the surface of the display device, the 2D path passing over the 3D surfaces of the objects. In this example, the 2D path corresponds to an overlying plane or layer. Then, in determining whether to select an object, the portion of the forward-facing 3D surface area of the object that is directly below the 2D path (from the vantage point of a user viewing the display screen) is evaluated.
In this example, the processor may be configured to alter the 3D surfaces of the objects directly below the determined 2D path to have a change in appearance. Thus, in these described virtual painting embodiments, a surface with an appearance change can be viewed from different angles as the object or workspace is rotated. Additional paths may be drawn on the structure after the structure is rotated, in order to select other objects that were not sufficiently visible for selection from the original orientation of the structure. In addition to rotation, the GUI may enable a user to pan and to zoom in and out of the workspace to place objects at desired locations and sizes so that they may be selected via one or more drawn paths.
The particular type of appearance change displayed on a particular surface of an object traversed by the determined path may vary depending on the visible portion of the front-facing surface area of the object whose appearance changes. In an example embodiment, reaching the configured threshold surface area portion may trigger such a paint color change. Thus, as the motion input directs one or more paths to cover an increasing percentage of an object, the virtual paint on that object may change color (e.g., from blue to red) to indicate to the user that the portion of the front-facing surface area covered by the paint has exceeded the predetermined selection threshold and, thus, that the object is now determined by the processor to be a selected object.
However, alternatively or additionally, when the object is determined to be selected based on the visible portion of the front-facing surface area having an appearance change, the processor may be configured to cause the object to exhibit a further appearance change, in addition to or in lieu of the appearance change that occurs while the path is being drawn before the threshold needed to cause the object to be selected is reached. For example, as shown in views B and C of FIG. 1, the path 128 causes the outlines of the smaller objects 112, 114 to become relatively thicker (compared to view A), which indicates that the processor has determined these objects to be selected in response to the path 128 being drawn. It should also be noted that since the processor does not determine that the larger object 116 is selected after the path 128 is drawn, the outline of the larger object 116 continues to have a relatively thin outline in all of views A, B, and C.
In the previous examples, the appearance change has been described as simulating a virtual painting or marking process. However, it should be appreciated that in alternative embodiments, other forms of appearance change may be made based on the determined path corresponding to the user's motion input. For example, the application software component may generate an appearance change that simulates a burned surface (e.g., simulating the effect of a torch, laser, arc, or other burning process at the locations traversed by the at least one path).
In another example, the application software component may generate appearance changes that simulate adding material to the surface, such as adding tiles, cloth, blankets, wallpaper, or other material on the surface traversed by the at least one path. In yet another embodiment, the application software may generate appearance changes based on a user-selectable image or other graphic (e.g., a skin or textured image file) that is mapped to a portion of the surface traversed by the at least one path.
In another example, the change in appearance may correspond to a change in the virtual structure of the object. For example, such structural changes may include replacing the traversed portions of a solid opaque surface with a transparent or translucent surface, or replacing the traversed portions of a solid surface with a perforated surface, a wireframe structure, or a mesh structure.
It should be appreciated that the described change in appearance may correspond to any change in appearance (or combination of changes in appearance). Thus, the change in appearance of the object traversed by the at least one path is not limited to any particular example described herein.
Further, it should be appreciated that in some embodiments, the application software component may not initially produce a change in appearance other than displaying a pointer over the surface of the object. Instead, the application software component may save the trajectory of the surface area traversed by the at least one path in memory until a threshold is reached that causes the object to be selected. Then, after selection, the application software component may generate an appearance change that visually highlights which objects have become selected.
In this example, the application software component may provide the user with the ability to turn off and on features for displaying changes in the appearance of the portion of the object traversed by the path. In some cases (e.g., a remote desktop), it may be desirable to turn off the display of appearance changes to increase the frame rate at which the GUI is updated and output through the display device.
Furthermore, in another embodiment, the application software component may only generate temporary appearance changes, for example simulating movement of a flashlight along the at least one path and temporarily illuminating the portions of the surfaces of the objects traversed by the at least one path. Similarly, the application software component may simulate temporary heating of metal by a heat source (e.g., a torch or laser), causing a glowing-metal surface color change that fades back to the original color after one or more seconds. As in the example where no appearance change is made, the application software component may save in memory the trajectories of the surface areas traversed by the at least one path (even after they fade back to their original color) until a threshold is reached that causes the objects to be selected.
In an example embodiment, the GUI of the application software component is capable of performing a plurality of operations on a selected group of objects, where the objects are selected based on the amount (e.g., portion) of the visible front-facing surface whose appearance changed as a result of being traversed by at least one path. Such operations may include deleting the selected objects, hiding the selected objects, copying the selected objects, and moving the selected objects relative to the workspace in response to at least one operation input 132 through the at least one input device 110. Further, it should be noted that such operations may include any type of operation applicable to the objects, and may include, for example, displaying, determining, and/or changing attributes (e.g., volume, mass, or type of material) and/or metadata associated with the selected objects.
For example, view C of fig. 1 shows an example of providing an operation input corresponding to a moving object by an operation of the input device 110. Such an operation input may correspond to operating a mouse to drag the selected objects 112, 114 as a group from the position shown in view a and view B to the position shown in view C with respect to the workspace 120 and the larger object 116.
As used herein, an operation performed on a selected group of objects corresponds to a user directing that a particular operation be performed such that all objects in the group are affected by it. Although more than one operation may be performed, it should be recognized that each operation affects all objects in the group.
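A small sketch of group operations applied only to the selected set, using a hypothetical Part type (the field and function names are illustrative, not an API from the patent); delete, hide, and move each touch every selected object and leave unselected objects alone.

    from dataclasses import dataclass

    @dataclass
    class Part:
        name: str
        selected: bool = False
        visible: bool = True
        position: tuple = (0.0, 0.0, 0.0)

    def delete_selected(parts):
        """Remove every selected part; parts that stayed unselected are untouched."""
        return [p for p in parts if not p.selected]

    def hide_selected(parts):
        """Hide the whole selected group."""
        for p in parts:
            if p.selected:
                p.visible = False

    def move_selected(parts, dx, dy, dz):
        """Translate the selected group relative to the workspace."""
        for p in parts:
            if p.selected:
                x, y, z = p.position
                p.position = (x + dx, y + dy, z + dz)

    group = [Part("112", selected=True), Part("114", selected=True), Part("116")]
    move_selected(group, 5.0, 0.0, 0.0)    # drags 112 and 114 together, as in view C of FIG. 1
    print([p.position for p in group])     # 116 is left where it was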
To illustrate these example embodiments in more detail, fig. 2-9 show examples of display output from a GUI for CAD software that depicts selection of an object by drawing one or more paths across the object. For example, FIG. 2 shows an example display output 200 of a structure 202, the structure 202 having three larger components 204, 206, 208, a set of rectangular blocks 210 arranged in rows, and a set of cube blocks 212 arranged in arcs.
In this example, the large components 204, 206, 208 may be easily selected individually (e.g., by a single mouse click using a selection tool). If the user wants to select the blocks 210, however, the user would have to select many small parts with individual mouse clicks. Thus, FIG. 2 illustrates a structure in which the previously described selection via a drawn path may be more efficient than clicking on each object individually. Such a drawn path may correspond to a spray paint gesture. Such spray paint gestures may be made available to the user via the GUI through a spray gesture tool or menu option on a toolbar that is selectable with input through an input device.
FIG. 3 illustrates an example display output 300 of the spray paint gesture tool being used to draw at least one path 302 over a portion of the structure 202 to select the cube blocks 212. Here, the at least one path 302 may correspond to a number of back-and-forth sprayed stripes over the tops of the arc of cube blocks 212.
Such spray paint gestures may also cover a small portion of the larger component 206 between the cube blocks 212, as shown in FIG. 3. However, the processor may operate to determine that the large component 206 is not selected because the large component 206 has only a small amount of paint thereon (i.e., only a small portion of the front-facing surface of the component 206 has a change in appearance resulting from the drawn path 302). Thus, the described paint gesture tool enables a user to paint around the rectangular blocks 210 and also permits a small amount of paint to spill onto the larger component 206 without causing the larger component 206 or the rectangular blocks 210 to be selected.
If the user wants to select the cube blocks 212 and the large component 206 below them, the user can use the described spray paint gesture tool as shown in the example display output 400 in FIG. 4. Here, the user draws an additional path 402 (e.g., paint stripes) across the sides and more of the top surface of the large component 206 in order to exceed the threshold (e.g., 50% of the forward-facing area of the object) that causes the large component 206 to be selected.
In an example embodiment, the spray paint gesture tool may also cause objects to be selected based on time. For example, FIG. 5 shows yet another example display output 500 in which a user has drawn a single path 502 over the larger component 204 of the structure 202. In this example, although the path covers only a small surface area, the at least one processor may be configured to cause the larger component 204 to be selected based on no other paths being drawn on other objects for a predetermined amount of time.
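The time-based behavior could be tracked with a simple timer keyed to the last painted object, as in the following hypothetical sketch; the timeout value and method names are assumptions, not values from the patent.

    import time

    class TimedSelection:
        """
        Track the most recently painted object and report it as selected once no
        further path has been drawn on any other object for `timeout` seconds.
        """
        def __init__(self, timeout=1.0):
            self.timeout = timeout
            self._last_object = None
            self._last_time = None

        def on_path_drawn(self, object_id, now=None):
            now = time.monotonic() if now is None else now
            self._last_object = object_id
            self._last_time = now

        def poll(self, now=None):
            """Return the object to select, or None while the timer is still running."""
            now = time.monotonic() if now is None else now
            if self._last_object is not None and now - self._last_time >= self.timeout:
                selected, self._last_object = self._last_object, None
                return selected
            return None

    sel = TimedSelection(timeout=1.0)
    sel.on_path_drawn("component_204", now=0.0)
    print(sel.poll(now=0.4))   # None: not enough idle time yet
    print(sel.poll(now=1.2))   # "component_204": no other path was drawn in time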
FIG. 6 illustrates yet another example display output 600 in which drawing a path may provide a faster and less cumbersome process for selecting objects without accidentally selecting undesired objects. In this example, a fence 602 is located in front of the structure 202 shown in FIGS. 2-5 described above. Here, a portion of the fence includes a number of small cylinders 604.
As shown in the display output 700 of FIG. 7, to perform a delete or hide operation that removes the fence 602 and small cylinders 604, a user can draw (e.g., virtually paint) a number of quick paths 702 (e.g., stripes) across the fence 602 and across the small cylinders 604 in order to select only the fence and the small cylinders. Deleting the selected group of objects will leave only the structure 202 shown in FIG. 2. Alternatively, an operation that isolates the selected objects (e.g., by inverting the selection and removing all objects other than the originally selected objects) will leave only the selected objects 602, 604 shown in the example display output 800 of FIG. 8. It should also be appreciated that the application software may enable objects that are not currently selected to become selected, and objects that were previously selected to become unselected, based on an invert-selection operation performed on the selected objects.
In an example embodiment, the described selection mode may draw a path (which places paint on objects) at the locations to which the motion input moves the pointer. However, in yet another embodiment, the locations at which the path is drawn may be limited to objects at certain positions. For example, the 3D workspace may have a first axis, a second axis, and a third axis that are orthogonal to each other. The plane of the 2D display screen (which displays the workspace) may extend across the first axis and the second axis, and the GUI may render the objects in the workspace to visually appear to have a depth extending backwards along the third axis. The described path may extend across the first axis and the second axis of the workspace in response to movement of the pointer across the display screen. However, the path may extend only to positions at which objects are within a predetermined depth level along the third axis of the workspace.
Such a depth level may be determined based on where the path initially begins (e.g., where a button on the input device is pressed). For example, FIG. 9 illustrates another example display output showing the previously described structure 202 and fence 602. In this example, a user may initiate generation of the motion input 904 by clicking, via a mouse (or other input device), on the display screen at a location such as that of the small cylinder 902, and then moving along the railing toward the vertical post 906. In response to the motion input, the at least one processor may be configured to determine a range of depths along the third axis of the workspace (e.g., depth into the workspace) based on the location of the small cylinder 902 along the third axis. The depth range so determined may, for example, span the locations of objects along the third axis immediately in front of and behind the location of the small cylinder 902, such as the left portion of the horizontal rail 908 and a portion of the vertical post 906.
Thus, as shown in FIG. 9, a path 910 drawn from the motion input 904 will paint the small cylinder 902, the left side of the horizontal rail 908, and a portion of the vertical post 906. However, if the user continues to move the mouse pointer for the motion input 904 past the lower component 206, the processor may operate to forgo drawing the path 910 on the component 206, because the portion of the component 206 that intersects the motion input may be outside the determined depth range.
Similarly, the user may operate a mouse or other input device to generate another motion input 912 up the vertical post 906. Accordingly, a corresponding path 914 may be drawn to paint the upper portion of the vertical post 906. However, if the motion input continues up past the larger components 204 and 208, the processor may operate to forgo drawing the path 914 on the components 204, 208, as these components will be outside the depth range determined based on the initial starting point of the path on the vertical post 906.
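A minimal sketch of the depth-window idea described above, assuming the window is centered on the depth of the object under the path's initial click with some tolerance; the function names, depths, and tolerance are illustrative assumptions only.

    def depth_range_from_anchor(anchor_depth, tolerance):
        """Depth window centered on the object under the path's starting click."""
        return (anchor_depth - tolerance, anchor_depth + tolerance)

    def receives_paint(object_depth, depth_range):
        """Only objects inside the depth window receive paint from the path."""
        near, far = depth_range
        return near <= object_depth <= far

    # A path started on the small cylinder 902 paints nearby fence parts but
    # skips deeper geometry such as component 206 farther back in the workspace.
    window = depth_range_from_anchor(anchor_depth=2.0, tolerance=1.5)
    print(receives_paint(2.4, window))    # True  (e.g. rail 908 / post 906)
    print(receives_paint(12.0, window))   # False (e.g. component 206)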
In an example embodiment, the GUI may have several different selectable selection modes, including the previously described surface selection mode in which the processor is configured to determine when to select an object based on the visible portion of the front facing surface region traversed by the at least one path. Such different selection modes may also include the previously described selection via a rectangular selection box and/or by individually clicking on each desired object to be selected.
In addition to these selection modes, the GUI may also include a selectable penetration selection mode. When the GUI is in the penetration selection mode, instead of drawing a path on the objects, the processor may be configured to cause objects to be selected, in response to at least one second motion input through operation of the at least one input device, when they are traversed by at least one second path corresponding to the at least one second motion input and have positions within a predetermined penetration depth range. Thus, objects that are partially and/or completely occluded by other objects, as well as objects that are not occluded, may be selected when they are traversed (directly or indirectly) by the path and are within the predetermined depth range. Further, it should be appreciated that the depth range may be large enough (or unlimited) so as to select all objects in the workspace traversed by the second path.
For example, as shown in FIG. 9, if the motion input 912 is performed while the GUI is in the penetration selection mode, all visible objects traversed by the path of the motion input (e.g., the vertical post 906 and the larger components 204, 208) may be selected. Further, in addition to these larger components, smaller blocks that are not visible but are hidden behind the vertical post 906 and traversed by the path may also be selected.
However, it should be appreciated that the depth of the penetration selection mode may be limited by a user-configurable penetration depth level or range, such that objects outside of that depth level or range are not selected even if they are traversed by the path. Furthermore, in the described penetration selection mode, selected objects that are visible may be visually highlighted by the display device so that they are visually distinct from non-selected objects.
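The penetration selection mode could be sketched as a filter over every object crossed by the second path, visible or occluded, optionally bounded by the configurable depth range; the names and depths below are hypothetical and only illustrate the idea.

    def penetration_select(object_depths, path_hits, depth_range=None):
        """
        object_depths: mapping object_id -> depth along the workspace's third axis.
        path_hits:     ids of all objects whose footprint is crossed by the second
                       path, whether visible or occluded by other objects.
        depth_range:   optional (near, far) limit; None selects at any depth.
        """
        selected = set()
        for obj_id in path_hits:
            depth = object_depths[obj_id]
            if depth_range is None or depth_range[0] <= depth <= depth_range[1]:
                selected.add(obj_id)
        return selected

    depths = {"post_906": 2.0, "hidden_block": 2.5, "component_204": 9.0}
    hits = {"post_906", "hidden_block", "component_204"}
    print(penetration_select(depths, hits))                      # all three selected
    print(penetration_select(depths, hits, depth_range=(0, 5)))  # occluded block included, 204 excluded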
Additionally, in an example embodiment, the selection mode may be associated with one or more different operations automatically performed by the processor on an object when the object is determined to be selected. For example, the previously described surface selection mode may be configured via the GUI such that once an object is selected, the object may become deleted, hidden, invisible, and/or de-emphasized (e.g., partially transparent), so that objects occluded by the selected object become visible as soon as the occluding object is selected. In this way, the user may perform a painting operation that virtually paints objects so that they become selected, thereby stripping away a layer of the structure to unhide the objects beneath; those underlying objects may then also be virtually painted in order to strip away another layer of the structure.
When an object is de-emphasized (e.g., partially transparent), such an object may also be made opaque again through a painting operation. For example, the processor may be configured to enable the partially transparent object to be painted via the previously described drawing of a path in order to select it. Selecting a de-emphasized object can immediately and automatically cause the object to become opaque again (e.g., no longer de-emphasized).
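As a purely illustrative sketch of this layer-peeling behaviour (the object dictionary and the on_painted handler are assumptions, not the application's actual data model), the toggle could be expressed as follows in Python.

def on_painted(obj):
    # Painting an opaque object selects it and de-emphasizes it so that objects
    # behind it become visible; painting a de-emphasized object selects it and
    # restores full opacity.
    obj["selected"] = True
    if obj.get("deemphasized"):
        obj["deemphasized"] = False
        obj["opacity"] = 1.0
    else:
        obj["deemphasized"] = True
        obj["opacity"] = 0.3

part = {"name": "cover_plate", "opacity": 1.0}
on_painted(part)   # cover plate becomes selected and partially transparent
on_painted(part)   # painting it again makes it opaque once more
print(part)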
It should also be appreciated that functions previously described as being carried out in different operating modes of the GUI may instead be carried out in the same mode. For example, the GUI may be configured so that, at any one time, one or more of the previously described selections can be performed using different types of inputs or input gestures via one or more input devices, without explicitly changing an operating mode that affects how the inputs are interpreted in order to select an object.
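One possible way such mode-free handling could be organized is sketched below; the event fields and handler names are illustrative assumptions only and do not correspond to any particular described embodiment.

def handle_input(event, handlers):
    # Choose a selection routine from the kind of input received, without
    # requiring the user to switch an explicit selection mode first.
    if event["type"] == "drag" and event.get("modifier") == "shift":
        return handlers["rectangle_select"](event["start"], event["end"])
    if event["type"] == "drag":
        return handlers["paint_select"](event["samples"])
    if event["type"] == "click":
        return handlers["single_select"](event["position"])
    return None

handlers = {
    "rectangle_select": lambda start, end: f"box from {start} to {end}",
    "paint_select": lambda samples: f"painted along {len(samples)} samples",
    "single_select": lambda position: f"clicked at {position}",
}
print(handle_input({"type": "drag", "samples": [(1, 2), (3, 4)]}, handlers))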
In an example embodiment, the described application software component may modify a structure in response to an operation performed on a selected set of objects associated with the structure. The CAD data and/or product data corresponding to the modified structure may be stored in a CAD file and/or PLM database. The described application software components and/or other software applications may then perform various functions based on the modified structures stored in the CAD data and/or the product data.
Such functionality may include generating engineering drawings and/or bills of materials (BOMs) that specify the particular parts (and quantities thereof) that may be used to build the structure (based on the CAD data and/or product data). Such engineering drawings and/or BOMs may be printed on paper by a printer, generated in electronic form (e.g., Microsoft Excel files or Acrobat PDF files), displayed by a display device, transmitted by email, stored in a data store, or otherwise produced in a form that can be used by individuals and/or machines to build a product corresponding to the designed structure. Further, it should be appreciated that a machine, such as a 3D printer, may use data corresponding to the CAD data to generate a physical structure (e.g., a part).
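By way of a non-limiting illustration, a simple aggregation of part records into a BOM file might resemble the following Python sketch; the part list, field names, and output file name are assumptions rather than the actual CAD/PLM schema.

import csv
from collections import Counter

parts = [
    {"part_no": "BRKT-100", "description": "Mounting bracket"},
    {"part_no": "BRKT-100", "description": "Mounting bracket"},
    {"part_no": "COL-906",  "description": "Vertical column"},
]

# Count how many of each part the structure uses and write a simple BOM table.
quantities = Counter(p["part_no"] for p in parts)
descriptions = {p["part_no"]: p["description"] for p in parts}

with open("bom.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Part number", "Description", "Quantity"])
    for part_no, qty in sorted(quantities.items()):
        writer.writerow([part_no, descriptions[part_no], qty])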
Referring now to FIG. 10, various example methods are shown and described. While the methods are described as a series of acts performed in a certain sequence, it will be appreciated that the methods may not be limited by the order of the sequence. For example, some acts may occur in a different order than described herein. Additionally, one action may occur concurrently with another action. Moreover, not all acts may be required to implement a methodology described herein in some cases.
It is important to note that while the present disclosure includes a description in the context of a fully functional system and/or series of acts, those skilled in the art will appreciate that the mechanisms of the present disclosure and/or at least a portion of the acts described are capable of being distributed in the form of computer-executable instructions embodied in any of a variety of forms of non-transitory machine-usable, computer-usable, or computer-readable medium, and that the present disclosure applies equally regardless of the particular type of instruction or signal bearing or storage medium utilized to actually carry out the distribution. Examples of non-transitory machine-usable/readable or computer-usable/readable media include: ROM, EPROM, tape, floppy disk, hard disk drive, SSD, flash memory, CD, DVD, and Blu-ray disk. Computer-executable instructions may include routines, subroutines, programs, applications, modules, libraries, threads of execution, and the like. Additionally, results of acts of the methods may be stored in a computer readable medium, may be displayed on a display device, and the like.
Referring now to FIG. 10, a methodology 1000 that facilitates selection of an object is illustrated. The method may begin at 1002, and the method may include acts that are performed by operation of at least one processor. These acts may include an act 1004 of causing a display device to display a plurality of individually selectable three-dimensional objects of a structure. Further, the method may include an act 1006 of determining at least one path through the object based on at least one motion input received through operation of at least one input device. Additionally, the method may include an act 1008 of causing at least two of the objects to be selected in a group while at least one of the objects remains unselected based on an amount of surface area of each object traversed by the at least one path. Further, the method may include an act 1010 of, in response to at least one operation input received through the at least one input device, causing at least one of the plurality of operations to be performed on the selected group of at least two objects based on whether the object is selected, and not performing at least one of the plurality of operations on at least one object that remains unselected. At 1012, the method may end.
It should be recognized that method 1000 may include other acts and features previously discussed with respect to system 100. For example, the method may include an act of causing a change in appearance of a portion of each particular object traversed by at least one path. In this example, at least two of the objects with a change in appearance may be selected in a group while at least one of the objects with a change in appearance remains unselected based on an amount of surface area of each object traversed by at least one path.
Additionally, the method may include generating, by the display device, a Graphical User Interface (GUI) that displays the objects in the workspace and that is capable of performing a plurality of operations on the selected group of objects in response to other inputs received through operation of the at least one input device. Such operations may include deleting the selected object, hiding the selected object, copying the selected object, moving the selected object, and displaying information about the selected object.
As previously discussed, each of the objects displayed by the display device includes a front facing surface area that is fully visible when displayed by the display device while unobstructed by one or more other objects. In an example embodiment, the method may include determining an amount of front facing surface area of each object. Further, step 1010 may include causing at least two objects having a change in appearance to be selected while at least one object having a change in appearance remains unselected based on the determined amount of visible portion of the front facing surface area of each object traversed by the at least one path.
Additionally, the method may include determining a threshold amount corresponding to a fractional amount of the front facing surface region. The step 1010 of causing at least two objects having a change in appearance to be selected may be performed based on the visible portion of the determined front facing surface area of each object traversed by the at least one path being equal to or greater than the threshold amount.
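A minimal sketch of this fractional-threshold test is given below; the object records, the 50% fraction, and the field names are made-up values used only for illustration.

def select_by_painted_fraction(objects, fraction=0.5):
    # An object is selected when the visible, painted portion of its
    # front-facing surface area reaches the threshold amount, which here is a
    # fraction of that object's total front-facing surface area.
    selected = set()
    for obj in objects:
        threshold_amount = fraction * obj["front_facing_area"]
        if obj["painted_visible_area"] >= threshold_amount:
            selected.add(obj["name"])
    return selected

objects = [
    {"name": "block_112", "front_facing_area": 10.0, "painted_visible_area": 6.0},
    {"name": "block_114", "front_facing_area": 8.0,  "painted_visible_area": 4.5},
    {"name": "block_116", "front_facing_area": 12.0, "painted_visible_area": 1.0},
]
print(select_by_painted_fraction(objects))   # block_116 remains unselected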
Additionally, in an example implementation, act 1006 of determining at least one path may be performed based on a determination of where virtual paint, sprayed onto a visible surface of an object in response to the at least one motion input, will land. The change in appearance of the portion of the object may correspond to a virtual representation of paint sprayed onto the object based on the determined path.
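One possible (assumed, not prescribed) way to determine where the virtual paint lands is to cast a ray into the scene for each sampled cursor position and keep the front-most hit, as in the sketch below; cast_ray and the hit-record fields are hypothetical names introduced for illustration.

def paint_hits(cursor_samples, cast_ray):
    # For each (x, y) sample of the motion input, find the nearest visible
    # surface under the cursor; that is where the virtual paint is placed.
    for x, y in cursor_samples:
        hits = cast_ray(x, y)
        if not hits:
            continue                      # cursor is over empty workspace
        nearest = min(hits, key=lambda h: h["depth"])
        yield nearest["object_id"], nearest["point"]

def fake_cast_ray(x, y):
    # Stand-in for a real picking routine: returns every surface the ray crosses.
    return [{"object_id": "column_906", "point": (x, y, 1.5), "depth": 1.5},
            {"object_id": "hidden_block", "point": (x, y, 3.0), "depth": 3.0}]

print(list(paint_hits([(100, 200), (101, 204)], fake_cast_ray)))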
It should be appreciated that in an example embodiment, the GUI is configured to display objects in a three-dimensional workspace having a first axis, a second axis, and a third axis that are orthogonal to one another. In this example, the path extends along the first axis and the second axis of the workspace (which may correspond to the plane of the display screen). The third axis may correspond to a virtual depth of the three-dimensional workspace in which the objects are depicted through the display screen. An example method may include determining a depth range along the third axis of the workspace. Additionally, act 1008 of causing the objects to have the appearance change may be based on the objects being displayed at positions in the workspace that are within the depth range. Further, at least one object may not change in appearance based on the position of that object being outside the depth range.
As previously discussed, the path may include a starting point at the first object. Then, an act 1006 of determining a depth range may include determining a range of depths based on the depth of the visible surface region of the first object traversed by the path. Thus, the particular depth range of a particular path may be set by the user selecting where to begin drawing the path. It should also be appreciated that each new path initiated by the user may establish a different depth range based on the initial depth of the visible surface of the object first traversed by the new path.
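As a hedged illustration of anchoring the depth range to the start of a path, the following sketch derives the range from the depth of the first visible surface the path touches; the tolerance value and helper names are assumptions, not parameters defined by the described embodiments.

def depth_range_from_start(first_hit_depth, tolerance=1.0):
    # The depth of the first surface the path touches defines a band; later
    # samples only paint surfaces whose depth falls inside that band.
    return (first_hit_depth - tolerance, first_hit_depth + tolerance)

def should_paint(hit_depth, depth_range):
    near, far = depth_range
    return near <= hit_depth <= far

rng = depth_range_from_start(2.0)     # path started on the vertical column at depth 2.0
print(should_paint(2.4, rng))         # True  -- still on the column
print(should_paint(5.0, rng))         # False -- larger component 204 lies outside the range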
In an example embodiment, the GUI may enable the current selection mode to change between a surface selection mode and a penetration selection mode. Act 1010 of causing at least two of the objects to be selected based on the visible portion of the front facing surface region traversed by the at least one path may be performed based on the GUI being in the surface selection mode. An example method may include an act of changing the current selection mode to the penetration selection mode. Once in the penetration selection mode, the method may include, in response to at least one second motion input received through operation of the at least one input device, causing an object that is completely occluded by other objects to be selected, wherein the completely occluded object is traversed by at least one second path corresponding to the at least one second motion input and has a position within a predetermined penetration depth range.
In an example embodiment, the object that becomes selected may be visually highlighted. Thus, an example method may include, in response to determining that at least two objects are selected based on the visible portion of the front-facing surface region traversed by the at least one path, causing other appearance changes to occur to the at least two objects in addition to, or in lieu of, the change in appearance that occurred before the at least two objects were determined to be selected.
As previously discussed, modifications to the structure based on operations performed on the selected group of objects may be persisted to the CAD file and/or the PLM data store as CAD data and/or product data. Actions associated with generating the engineering drawings and/or BOMs may then be performed based on the CAD data or the product data. Further, the method may include an individual manually building the structure based on the engineering drawing and/or the BOM. Further, such actions may include a machine (e.g., a 3D printer) building the structure based on the CAD data.
As previously described, the acts associated with the methods (other than any described manual acts, such as the acts of manually building a structure) may be performed by one or more processors. Such processors may be included, for example, in one or more data processing systems that execute software components operable to cause those acts to be performed by the one or more processors. In an example embodiment, such software components may be written in a software environment/language/framework such as Java, JavaScript, Python, C#, C++, or any other software tool capable of producing components and graphical user interfaces configured to perform the acts and features described herein.
Fig. 11 shows a block diagram of a data processing system 1100 (also referred to as a computer system) in which embodiments may be implemented as part of a PLM, CAD, and/or other system operatively configured, for example by software, to perform the processes described herein. The depicted data processing system includes at least one processor 1102 (e.g., a CPU) that may be connected to one or more bridges/controllers/buses 1104 (e.g., a north bridge, a south bridge). One of the buses 1104, for example, may include one or more I/O buses, such as a PCI Express bus. Additional components connected to the various buses in the depicted example may include a main memory 1106 (RAM) and a graphics controller 1108. The graphics controller 1108 may be connected to one or more display devices 1110. It should also be noted that in some embodiments one or more controllers (e.g., graphics, south bridge) may be integrated with the CPU (on the same chip or die). Examples of CPU architectures include IA-32, x86-64, and ARM processor architectures.
Other peripheral devices connected to the one or more buses may include a communication controller 1112 (e.g., an Ethernet controller, a WiFi controller, a cellular controller), which may operate to connect to a Local Area Network (LAN), Wide Area Network (WAN), cellular network, and/or other wired or wireless network 1114 or communication equipment.
Other components connected to the various buses may include one or more I/O controllers 1116, such as a USB controller, a Bluetooth controller, and/or a dedicated audio controller (connected to a speaker and/or a microphone). It will also be appreciated that various peripheral devices may be connected to the USB controller (via various USB ports), including input devices 1118 (e.g., keyboard, mouse, touch screen, trackball, joystick, camera, microphone, scanner, motion sensing device), output devices 1120 (e.g., printer, speaker), or any other type of device operable to provide input to or receive output from the data processing system. In addition, it will be appreciated that many devices referred to as input devices or output devices may both provide input to and receive output from the data processing system. Further, it should be appreciated that other peripheral hardware 1122 connected to the I/O controller 1116 may include any type of device, machine, or component configured to communicate with the data processing system.
Additional components connected to the various buses may include one or more storage controllers 1124 (e.g., SATA). A storage controller may be connected to a storage device 1126, such as one or more storage drives and/or any associated removable media, which may be any suitable non-transitory machine-usable or machine-readable storage medium. Examples include non-volatile devices, read-only devices, writable devices, ROM, EPROM, tape storage, floppy disk drives, hard disk drives, solid-state drives (SSDs), flash memory, optical disk drives (CD, DVD, Blu-ray), and other known optical, electrical, or magnetic storage device drives and/or computer media. Additionally, in some examples, a storage device such as an SSD may be connected directly to an I/O bus 1104 (e.g., a PCI Express bus).
A data processing system according to embodiments of the present disclosure may include an operating system 1128, software/firmware 1130, and data storage 1132 (which may be stored on storage device 1126 and/or memory 1106). Such operating systems may employ a Command Line Interface (CLI) shell (shell) and/or a Graphical User Interface (GUI) shell. The GUI shell permits multiple display windows to be presented simultaneously in the graphical user interface, where each display window provides an interface to a different application or an interface to a different instance of the same application. A cursor or pointer in the graphical user interface may be manipulated by a user through a pointing device (e.g., a mouse or a touch screen). The position of the cursor/pointer may be changed and/or an event such as clicking a mouse button or touching a touch screen may be generated to stimulate the desired response. Examples of operating systems that may be used in a data processing system may include the Microsoft Windows, Linux, UNIX, iOS, and Android operating systems. Further, examples of data stores include data files, data tables, relational databases (e.g., Oracle, Microsoft SQL Server), database servers, or any other structure and/or device capable of storing data that is retrievable by a processor.
The communication controller 1112 may be connected to a network 1114 (not part of the data processing system 1100), which may be any public or private data processing system network or combination of networks known to those skilled in the art, including the Internet. The data processing system 1100 may communicate over the network 1114 with one or more other data processing systems, such as a server 1134 (also not part of the data processing system 1100). Alternatively, a data processing system may correspond to multiple data processing systems implemented as part of a distributed system, in which processors associated with several data processing systems may communicate over one or more network connections and may collectively perform tasks described as being performed by a single data processing system. Thus, it should be understood that when reference is made to a data processing system, such a system may be implemented across several data processing systems organized in a distributed system in communication with each other over a network.
Further, the term "controller" means any device, system, or component thereof that controls at least one operation, whether such a device is implemented in hardware, firmware, software, or some combination of at least two of the same. It should be noted that the functionality associated with any particular controller may be centralized or distributed, whether locally or remotely.
Additionally, it should be appreciated that the data processing system may be implemented as a virtual machine in a virtual machine architecture or in a cloud environment. For example, the processor 1102 and associated components may correspond to a virtual machine executing in a virtual machine environment of one or more servers. Examples of virtual machine architectures include VMware ESXi, Microsoft Hyper-V, Xen, and KVM.
Those of ordinary skill in the art will appreciate that the hardware depicted for the data processing system may vary for particular implementations. For example, the data processing system 1100 in this example may correspond to a computer, a workstation, and/or a server. However, it should be appreciated that alternative embodiments of the data processing system may be configured with corresponding or alternative components, for example in the form of a mobile phone, tablet computer, control panel or any other system, that may operate to process data and implement the functions and features described herein associated with the operation of the data processing system, computer, processor and/or controller described herein. The depicted example is provided for illustrative purposes only and is not meant to imply architectural limitations with respect to the present disclosure.
As used herein, the terms "component" and "system" are intended to encompass hardware, software, or a combination of hardware and software. Thus, for example, a system or component may be a process, a process executing on a processor, or a processor. In addition, a component or system may be located on a single device or distributed across multiple devices.
Also, as used herein, a processor corresponds to any electronic device configured via hardware circuitry, software, and/or firmware to process data. For example, a processor described herein may correspond to one or more (or a combination) of the following: a microprocessor, a CPU, an FPGA, an ASIC, or any other Integrated Circuit (IC) or any other type of circuit capable of processing data in a data processing system, which may be in the form of a controller board, a computer, a server, a mobile phone, and/or any other type of electronic device.
Those skilled in the art will recognize that, for simplicity and clarity, the full structure and operation of all data processing systems suitable for use with the present disclosure is neither shown nor described herein. Rather, only so much of a data processing system as is unique to the present disclosure or necessary for an understanding of the present disclosure is shown and described. The remaining construction and operation of the data processing system 1100 may conform to any of the various current implementations and practices known in the art.
Although example embodiments of the present disclosure have been described in detail, those skilled in the art will appreciate that various changes, substitutions, variations, and alterations to the disclosure herein may be made without departing from the spirit and scope of the disclosure in its broadest form.
None of the description in this application should be read as implying that any particular element, step, action, or function is an essential element which must be included in the claim scope: the scope of patented subject matter is defined only by the claims that follow. Furthermore, none of these claims is intended to invoke a means-plus-function construction unless the exact words "means for" are followed by a participle.

Claims (15)

1. A system (100) for selecting an object, comprising:
at least one processor (102) configured to perform the following operations via executable instructions included in at least one memory:
causing a display device (108) to display a plurality of individually selectable three-dimensional objects (112, 114, 116) in a three-dimensional workspace having first, second, and third axes that are orthogonal to one another;
determining at least one path (128) through the object based on at least one motion input (130) received through operation of at least one input device (110), wherein the path extends along a first axis and a second axis of the workspace, and wherein the path includes a start point at a first object;
determining a depth range along the third axis of the workspace based on the depth of the visible surface area of the first object traversed by the path;
based on a first set of objects being displayed in the workspace at locations within the depth range, causing at least some of the set of objects traversed by the at least one path to have a change in appearance, and, based on the location of at least one object being outside the depth range, not changing the appearance of the at least one object, wherein each object of the set of objects comprises a front-facing surface region that is fully visible when displayed by the display device while being unobstructed by one or more other objects;
based on the amount of surface area of each object of the set of objects having a change in appearance, causing at least two objects (112, 114) of the set of objects to be selected in a set while at least one object (116) of the set of objects remains unselected, wherein objects that do not change in appearance when traversed by the at least one path also remain unselected;
in response to at least one operation input (132) received through the at least one input device, causing at least one of a plurality of operations to be performed on the selected group of at least two objects and at least one of the plurality of operations to not be performed on objects that remain unselected;
generating, by the display device, a Graphical User Interface (GUI), wherein the GUI enables a current selection mode to change between a surface selection mode and a penetration selection mode;
determining when to select an object based on a visible portion of a forward facing surface region having a change in appearance when the GUI is in the surface selection mode; and
when the GUI is in the penetration selection mode, in response to at least one second motion input by operation of at least one input device, causing an object that is fully occluded by other objects to be selected, wherein the fully occluded object is traversed by at least one second path corresponding to the at least one second motion input and has a position within a predetermined penetration depth range.
2. The system of claim 1, wherein the at least one processor is configured to:
based on the amount of surface area of each object of the set of objects having a change in appearance, causing the at least two objects of the set of objects having a change in appearance to be selected in the set while the at least one object of the set of objects having a change in appearance remains unselected.
3. The system of claim 2, wherein the GUI displays objects in the workspace and is capable of performing a plurality of operations on the selected group of objects in response to other inputs received through operation of the at least one input device, the operations including deleting the selected object, hiding the selected object, copying the selected object, moving the selected object, and displaying information about the selected object.
4. The system of claim 3, wherein the at least one processor is configured to determine an amount of front facing surface area for each object in the set of objects, wherein the at least one processor is configured to cause the at least two objects in the set of objects having a change in appearance to be selected while the at least one object in the set of objects having a change in appearance remains unselected based on a visible portion of the determined amount of front facing surface area for each object in the set of objects having a change in appearance.
5. The system of claim 4, wherein the at least one processor is configured to determine a threshold amount corresponding to a partial amount of the forward facing surface region, and wherein the at least one processor is configured to determine that those objects of the set of objects are selected for which the visible portion of the determined front facing surface area having a change in appearance is equal to or greater than the threshold amount.
6. The system of claim 5, wherein the change in appearance of the portion of the object corresponds to a virtual representation of paint sprayed onto the object.
7. The system of claim 6, wherein when an object is determined to be selected based on the visible portion of the front facing surface region having a change in appearance, the processor is configured to cause the object to undergo other changes in appearance in addition to or in place of the change in appearance prior to determining that the object is selected.
8. A method for selecting an object, comprising:
by operation of at least one processor (102):
causing a display device (108) to display a plurality of individually selectable three-dimensional objects (112, 114, 116) in a three-dimensional workspace having first, second, and third axes that are orthogonal to one another;
determining at least one path (128) through the object based on at least one motion input (130) received through operation of at least one input device (110), wherein the path extends along a first axis and a second axis of the workspace, and wherein the path includes a start point at a first object;
determining a depth range along the third axis of the workspace based on a depth of a visible surface area of the first object traversed by the path,
based on a first set of objects being displayed in the workspace at locations within the depth range, causing at least some of the set of objects traversed by the at least one path to have a change in appearance, and, based on the location of at least one object being outside the depth range, not changing the appearance of the at least one object, wherein each object of the set of objects includes a front-facing surface region that is fully visible when displayed by the display device while unobstructed by one or more other objects;
based on the amount of surface area of each object of the set of objects having a change in appearance, causing at least two objects (112, 114) of the set of objects to be selected in a set while at least one object (116) of the set of objects remains unselected, wherein objects that do not change in appearance when traversed by the at least one path also remain unselected;
in response to at least one operation input (132) received through the at least one input device, causing at least one of a plurality of operations to be performed on the selected group of at least two objects and at least one of the plurality of operations to not be performed on objects that remain unselected;
generating, by the display device, a Graphical User Interface (GUI) that enables a current selection mode to change between a surface selection mode and a penetration selection mode;
determining which objects to select based on the portion of the visible front-facing surface region having the change in appearance when the GUI is in the surface selection mode;
changing the current selection mode to the penetration selection mode; and
in response to at least one second motion input by operation of at least one input device, causing an object that is fully occluded by other objects to be selected, wherein the fully occluded object is traversed by at least one second path corresponding to the at least one second motion input and has a position within a predetermined penetration depth range.
9. The method of claim 8, further comprising:
wherein the at least two objects of the set of objects having a change in appearance are caused to be selected in the set based on an amount of surface area of each object of the set of objects having a change in appearance while the at least one object of the set of objects having a change in appearance remains unselected.
10. The method of claim 9, wherein the GUI displays objects in the workspace and is capable of performing a plurality of operations on the selected group of objects in response to other inputs received through operation of the at least one input device, the operations including deleting the selected object, hiding the selected object, copying the selected object, moving the selected object, and displaying information about the selected object.
11. The method of claim 10, further comprising:
determining, by operation of the at least one processor, an amount of front facing surface area for each object in the set of objects,
wherein the at least two objects of the set of objects having a change in appearance are caused to be selected while the at least one object of the set of objects having a change in appearance remains unselected based on the determined amount of visible portion of the front facing surface area of each object of the set of objects having a change in appearance.
12. The method of claim 11, further comprising:
determining, by operation of the at least one processor, a threshold amount corresponding to a partial amount of the forward facing surface region;
wherein the at least two objects of the set of objects having a change in appearance are caused to be selected based on the visible portion of the determined front facing surface area of each object having a change in appearance being equal to or greater than the threshold amount.
13. The method of claim 12, wherein the change in appearance of the portion of the object corresponds to a virtual representation of paint sprayed onto the object.
14. The method of claim 13, further comprising: in response to determining that the at least two objects of the set of objects are selected based on the visible portion of the front facing surface region having a change in appearance, in addition to or in lieu of the change in appearance prior to determining that the at least two objects are selected, causing other changes in appearance to the at least two objects.
15. A non-transitory computer-readable medium encoded with executable instructions that, when executed by at least one processor, cause the at least one processor to perform a method for selecting an object, the method comprising:
causing a display device to display a plurality of individually selectable three-dimensional objects in a three-dimensional workspace having a first axis, a second axis, and a third axis that are orthogonal to one another;
determining at least one path through the object based on at least one motion input received through operation of at least one input device, wherein the path extends along a first axis and a second axis of the workspace, and wherein the path includes a starting point at a first object;
determining a depth range along the third axis of the workspace based on the depth of the visible surface area of the first object traversed by the path;
based on a first set of objects being displayed in the workspace at locations within the depth range, causing at least some of the set of objects traversed by the at least one path to have a change in appearance, and, based on the location of at least one object being outside the depth range, not changing the appearance of the at least one object, wherein each object of the set of objects includes a front-facing surface region that is fully visible when displayed by the display device while unobstructed by one or more other objects;
based on an amount of surface area of each object of the set of objects having a change in appearance, causing at least two objects of the set of objects to be selected in a set while at least one object of the set of objects remains unselected, wherein objects that do not change in appearance when traversed by the at least one path also remain unselected;
in response to at least one operation input received through the at least one input device, causing at least one operation of a plurality of operations to be performed on the selected group of at least two objects and at least one operation of the plurality of operations to not be performed on objects that remain unselected;
generating, by the display device, a Graphical User Interface (GUI), wherein the GUI enables a current selection mode to change between a surface selection mode and a penetration selection mode;
determining which objects to select based on the portion of the visible front-facing surface region having the change in appearance when the GUI is in the surface selection mode;
changing the current selection mode to the penetration selection mode; and
in response to at least one second motion input by operation of at least one input device, causing an object that is fully occluded by other objects to be selected, wherein the fully occluded object is traversed by at least one second path corresponding to the at least one second motion input and has a position within a predetermined penetration depth range.
CN201680050238.2A 2015-08-28 2016-07-29 Object selection system and method Active CN107924268B (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US14/838,957 US9927965B2 (en) 2015-08-28 2015-08-28 Object selection system and method
US14/838,957 2015-08-28
PCT/US2016/044633 WO2017039902A1 (en) 2015-08-28 2016-07-29 Object selection system and method

Publications (2)

Publication Number Publication Date
CN107924268A CN107924268A (en) 2018-04-17
CN107924268B true CN107924268B (en) 2021-11-02

Family

ID=56616084

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201680050238.2A Active CN107924268B (en) 2015-08-28 2016-07-29 Object selection system and method

Country Status (5)

Country Link
US (1) US9927965B2 (en)
EP (1) EP3323034A1 (en)
JP (1) JP6598984B2 (en)
CN (1) CN107924268B (en)
WO (1) WO2017039902A1 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6572099B2 (en) * 2015-11-06 2019-09-04 キヤノン株式会社 Imaging apparatus, control method therefor, and program
US10497159B2 (en) * 2017-10-31 2019-12-03 The Boeing Company System and method for automatically generating illustrations
US11935183B2 (en) * 2020-01-10 2024-03-19 Dirtt Environmental Solutions Ltd. Occlusion solution within a mixed reality design software application
US11475173B2 (en) * 2020-12-31 2022-10-18 Dassault Systémes SolidWorks Corporation Method for replicating a component mating in an assembly
CN113326849B (en) * 2021-07-20 2022-01-11 广东魅视科技股份有限公司 Visual data acquisition method and system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0566293A2 (en) * 1992-04-15 1993-10-20 Xerox Corporation Graphical drawing and editing systems and methods therefor
CN102150104A (en) * 2009-07-21 2011-08-10 晶翔微系统股份有限公司 Selection device and method
CN102708585A (en) * 2012-05-09 2012-10-03 北京像素软件科技股份有限公司 Method for rendering contour edges of models
CN103278815A (en) * 2012-01-11 2013-09-04 索尼公司 Imaging device and method for imaging hidden objects

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5689628A (en) * 1994-04-14 1997-11-18 Xerox Corporation Coupling a display object to a viewpoint in a navigable workspace
JPH0863620A (en) * 1994-08-26 1996-03-08 Matsushita Electric Ind Co Ltd Method and device for processing graphic
JP3276068B2 (en) * 1997-11-28 2002-04-22 インターナショナル・ビジネス・マシーンズ・コーポレーション Object selection method and system
JP3033956B2 (en) * 1998-07-23 2000-04-17 インターナショナル・ビジネス・マシーンズ・コーポレイション Method for changing display attributes of graphic objects, method for selecting graphic objects, graphic object display control device, storage medium storing program for changing display attributes of graphic objects, and program for controlling selection of graphic objects Storage media
US6727924B1 (en) * 2000-10-17 2004-04-27 Novint Technologies, Inc. Human-computer interface including efficient three-dimensional controls
EP2010999A4 (en) * 2006-04-21 2012-11-21 Google Inc System for organizing and visualizing display objects
US8907947B2 (en) * 2009-12-14 2014-12-09 Dassault Systèmes Method and system for navigating in a product structure of a product
US8686997B2 (en) * 2009-12-18 2014-04-01 Dassault Systemes Method and system for composing an assembly
US9524523B2 (en) * 2010-09-01 2016-12-20 Vigor Systems Inc. Fail-safe switch for media insertion server in a broadcast stream

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
D3: an Immersive aided design deformation method; Vincent Meyrueis; Proceedings of the 16th ACM symposium on virtual reality software and technology; 2009-11-20; Section 2.1 *

Also Published As

Publication number Publication date
JP2018530052A (en) 2018-10-11
EP3323034A1 (en) 2018-05-23
US20170060384A1 (en) 2017-03-02
CN107924268A (en) 2018-04-17
JP6598984B2 (en) 2019-10-30
US9927965B2 (en) 2018-03-27
WO2017039902A1 (en) 2017-03-09

Similar Documents

Publication Publication Date Title
CN107924268B (en) Object selection system and method
US10061496B2 (en) Snapping of object features via dragging
EP2333651B1 (en) Method and system for duplicating an object using a touch-sensitive display
Seo et al. Direct hand touchable interactions in augmented reality environments for natural and intuitive user experiences
US9563980B2 (en) Grip manipulatable shadows in 3D models
US11061529B2 (en) Generating contextual guides
JP2014186361A5 (en)
DE112012006199T5 (en) Virtual hand based on combined data
CN105103112A (en) Apparatus and method for manipulating the orientation of object on display device
JP2011128962A (en) Information processing apparatus and method, and computer program
CN102722338A (en) Touch screen based three-dimensional human model displaying and interacting method
US20140040832A1 (en) Systems and methods for a modeless 3-d graphics manipulator
JP7340927B2 (en) A method for defining drawing planes for the design of 3D objects
CN105046748B (en) The 3D photo frame apparatus of image can be formed in a kind of three-dimensional geologic scene
JP5767371B1 (en) Game program for controlling display of objects placed on a virtual space plane
KR20150073100A (en) A device with a touch-sensitive display comprising a mechanism to copy and manipulate modeled objects
Mohanty et al. Kinesthetically augmented mid-air sketching of multi-planar 3D curve-soups
JP2016016319A (en) Game program for display-controlling objects arranged on virtual spatial plane
JP4907156B2 (en) Three-dimensional pointing method, three-dimensional pointing device, and three-dimensional pointing program
Bruno et al. The over-sketching technique for free-hand shape modelling in Virtual Reality
CN107895388A (en) Method and device for filling colors of graph, computer equipment and storage medium
US20230196704A1 (en) Method for duplicating a graphical user interface (gui) element in a 3d scene
RU2519286C2 (en) Contactless computer control method (versions)
WO2014038217A1 (en) Texture-drawing support device
US20140225903A1 (en) Visual feedback in a digital graphics system output

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
Address after: Texas
Applicant after: SIEMENS INDUSTRY SOFTWARE Ltd.
Address before: Texas
Applicant before: SIEMENS PRODUCT LIFECYCLE MANAGEMENT SOFTWARE Inc.
GR01 Patent grant