CN111091611B - Workshop digital twinning-oriented augmented reality system and method - Google Patents


Info

Publication number
CN111091611B
Authority
CN
China
Prior art keywords
workshop, equipment, information, current frame, frame image
Legal status
Active
Application number
CN201911352218.2A
Other languages
Chinese (zh)
Other versions
CN111091611A (en)
Inventor
陈成军 (Chen Chengjun)
丁旭彤 (Ding Xutong)
李东年 (Li Dongnian)
洪军 (Hong Jun)
李正浩 (Li Zhenghao)
Current Assignee
Xi'an Jiaotong University
Qingdao University of Technology
Chongqing Institute of Green and Intelligent Technology of CAS
Original Assignee
Xi'an Jiaotong University
Qingdao University of Technology
Chongqing Institute of Green and Intelligent Technology of CAS
Application filed by Xi'an Jiaotong University, Qingdao University of Technology, and Chongqing Institute of Green and Intelligent Technology of CAS
Priority to CN201911352218.2A
Publication of CN111091611A
Application granted
Publication of CN111091611B
Legal status: Active

Classifications

    • G06T 15/04 — Texture mapping (under G06T 15/00, 3D [three-dimensional] image rendering)
    • G06T 19/006 — Mixed reality (under G06T 19/00, manipulating 3D models or images for computer graphics)
    • G06T 19/20 — Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06V 20/10 — Terrestrial scenes (under G06V 20/00, scenes and scene-specific elements)
    • Y02P 90/30 — Computing systems specially adapted for manufacturing

Abstract

The invention relates to an augmented reality system and method for workshop digital twinning, comprising a physical workshop, a workshop digital twin system, a camera group, an image acquisition module, a workshop three-dimensional model labeling module, a device identification module, a data query and AR registration module, and an AR display module. Workshop video images are acquired by the camera group, the devices appearing in the video are identified by the device identification module, and augmented reality techniques dynamically superimpose digital twin information on or near each device's image region, realizing AR display of workshop device information. The invention requires no complex three-dimensional model of the physical workshop: the video image replaces the three-dimensional model display, so the picture is smoother; whether each device's information is displayed, the display interface type, and the interface parameters can all be configured, so the display mode is more flexible and operation is simpler; and the device pointed at by the mouse can be identified and its related information displayed.

Description

Workshop digital twinning-oriented augmented reality system and method
Technical Field
The invention relates to an augmented reality system and method for workshop digital twinning, and belongs to the field of intelligent manufacturing.
Background
Current digital twin models for intelligent factories and workshops require building a complex three-dimensional model of the workshop; the digital twin information generated by the workshop digital twin system is then displayed on the three-dimensional models of the corresponding workshop equipment. The modeling effort is enormous, and whenever the workshop layout changes the three-dimensional model must be changed as well, so the later maintenance workload is heavy; moreover, rendering the complex model consumes substantial computing resources.
Disclosure of Invention
To solve these technical problems, the invention provides a workshop-digital-twin-oriented augmented reality system that needs no complex three-dimensional model of the physical workshop, superimposes device information on the workshop video image, has a short development cycle, and allows display effects to be configured flexibly.
The technical scheme of the invention is as follows:
the workshop digital twin-oriented augmented reality system comprises a physical workshop and a workshop digital twin system, wherein the workshop digital twin system outputs digital twin information and further comprises the following modules: camera group: the method comprises the steps of fixing the video image in a physical workshop, and collecting a video image of the current state of the physical workshop; and an image acquisition module: collecting workshop video images shot by a camera selected by a user; and a workshop three-dimensional model labeling module: constructing a virtual three-dimensional model of a physical workshop, wherein a virtual three-dimensional model of equipment in the physical workshop is constructed by using proxy shapes, each proxy shape represents the spatial shape and position of the equipment in the physical workshop, then labeling each proxy shape, establishing a one-to-one correspondence between labeling and equipment identification, and generating a workshop three-dimensional labeling model; and a device identification module: identifying equipment needing to display equipment information according to user setting, specifically, acquiring a current frame image from the workshop video image, and identifying equipment identifications corresponding to the equipment needing to be identified in the current frame image and imaging areas of the equipment on the current frame image according to the position corresponding relation between each agent shape body in the three-dimensional labeling model and the equipment in the current frame image and labeling of each agent shape body; data query and AR registration module: inquiring a workshop digital twin system according to the equipment identification to acquire corresponding digital twin information, and determining an information display area of each equipment according to an imaging area of each equipment on a current frame image; AR display module: and superposing the digital twin 
information of the equipment on the information display area of the equipment on the current frame image, thereby realizing AR display of the workshop equipment information.
More preferably, the camera group comprises multiple cameras installed in different areas. Each camera comprises a lens, an image sensor, a pan-tilt head, and an image-sensor pose detection module; the pan-tilt head controls the orientation of the image sensor, and the pose detection module measures the direction the sensor faces. The image acquisition module also acquires the position and pose of the currently selected camera's image sensor. The device identification module obtains this position and pose and passes them to the workshop three-dimensional model labeling module. Using the transformation between the physical workshop coordinate system and the labeled-model coordinate system, the labeling module sets the position and pose of the currently selected virtual camera's virtual image sensor so that the virtual imaging model in the labeled model matches the physical workshop's imaging model, and synthesizes a virtual composite image of the labeled model from the camera imaging model. Because the virtual composite image and the current frame share the same imaging model, imaging position, and imaging pose, the identification module can establish the positional correspondence between proxy shapes in the composite image and devices in the current frame; it reads the label of the proxy shape corresponding to each device the user chose to identify, and from that label determines the device identifier and its imaging area in the current frame.
More preferably, the transformation between the physical workshop coordinate system and the labeled-model coordinate system is an identity (equivalent) relationship. Specifically, the workshop three-dimensional model labeling module further unifies the labeled model's coordinate system with the physical workshop's, so that each physical device's coordinates coincide with those of its proxy shape in the labeled model, and each physical camera's coordinates coincide with those of its virtual camera.
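The coordinate unification above can be sketched as a rigid transform that degenerates to the identity when the two coordinate systems are equivalent. This is an illustrative sketch only; the function name and default arguments are not from the patent:

```python
# Sketch: mapping a point (or camera position) from the physical-workshop
# frame into the labeled-model frame. With the unified (equivalent)
# coordinate systems described above, the rotation R is the identity and
# the translation t is zero, so coordinates carry over unchanged.

IDENTITY_R = ((1.0, 0.0, 0.0),
              (0.0, 1.0, 0.0),
              (0.0, 0.0, 1.0))

def to_model_frame(point, R=IDENTITY_R, t=(0.0, 0.0, 0.0)):
    """Apply the rigid transform p' = R @ p + t (row-major 3x3 R)."""
    return tuple(sum(R[i][j] * point[j] for j in range(3)) + t[i]
                 for i in range(3))
```

With the default identity transform, a camera position measured in the physical workshop can be assigned to the virtual camera directly.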
More preferably, the labels are color labels: the proxy shapes of different devices are rendered in different colors, and a one-to-one mapping between colors and device identifiers is established. Using the positional correspondence between proxy shapes in the virtual composite image and devices in the current frame, reading a pixel's color value identifies the device identifier of the corresponding device in the current frame, and the region covered by pixels of that color value determines the device's imaging area.
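As a concrete illustration of the color-label lookup, consider the following minimal sketch; the colors, device identifiers, and image representation are hypothetical, not taken from the patent:

```python
# One-to-one mapping between proxy-shape render colors and device identifiers.
COLOR_TO_DEVICE = {
    (255, 0, 0): "lathe_01",
    (0, 255, 0): "robot_02",
    (0, 0, 255): "conveyor_03",
}

def device_at_pixel(virtual_image, x, y):
    """Resolve the device whose proxy shape covers pixel (x, y) in the
    virtual composite image, or None for background. The image is a
    row-major list of rows of (R, G, B) tuples."""
    return COLOR_TO_DEVICE.get(tuple(virtual_image[y][x]))
```

Because the composite image is rendered with the same imaging model as the physical camera, the same pixel coordinates index the device in the current video frame.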
More preferably, the system further comprises a workshop information display setting module and an AR information display interface library. The interface library defines several types of display interface; the setting module specifies, for each device, the display interface type for its parameters and/or digital twin information and which information or parameters the interface shows. After the data query and AR registration module obtains a device's digital twin information and determines its information display area, it sends the display interface type, the information display area, and the information or parameters to display to the AR display module, according to that device's settings. The AR display module fetches a display interface of the corresponding type from the interface library, superimposes it on the device's information display area in the current frame image, and shows the device's information or parameters through that interface.
Preferably, the devices whose information must be displayed are identified according to the user's settings. Either the user sets all device information to be displayed, and all devices in the current frame image are identified; or the device identification module reads the mouse position on the current frame image, determines the corresponding position in the labeled model from the transformation between the physical workshop and labeled-model coordinate systems, finds the label of the proxy shape at that position, and from the label identifies the device the mouse points at, its device identifier, and its imaging area on the current frame.
The invention also provides an augmented reality method for workshop digital twinning.
A workshop-digital-twin-oriented augmented reality method comprises the following steps:
Step 1: construct a virtual three-dimensional model of the physical workshop, modeling each device as a proxy shape that represents its spatial shape and position; label each proxy shape, establish a one-to-one correspondence between labels and device identifiers, and generate the workshop three-dimensional labeled model.
Step 2: fix several cameras in the physical workshop and collect video images of the workshop's current state through them.
Step 3: collect the workshop video images shot by the camera the user selects.
Step 4: identify the devices whose information should be displayed, according to the user's settings: acquire the current frame image from the workshop video and, using the positional correspondence between proxy shapes in the labeled model and devices in the current frame together with the proxy shapes' labels, identify each device's identifier and its imaging area on the current frame.
Step 5: query the workshop digital twin system by device identifier to obtain the corresponding digital twin information, and determine each device's information display area from its imaging area on the current frame.
Step 6: superimpose each device's digital twin information on its information display area on the current frame image, realizing AR display of workshop device information.
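Steps 4 through 6 above can be sketched as a per-frame routine. This is an illustrative sketch only; `identify_devices`, `query_twin`, and the overlay representation are hypothetical stand-ins for the modules the method describes:

```python
def ar_frame(frame, identify_devices, query_twin):
    """Process one current-frame image: identify devices and their imaging
    areas (step 4), query the twin system per device (step 5), and return
    the overlays to superimpose on the frame (step 6)."""
    overlays = []
    for device_id, area in identify_devices(frame):
        info = query_twin(device_id)  # digital twin information for this device
        overlays.append((device_id, area, info))
    return overlays
```

In a running system this routine would be called in a loop on each new frame, which is why device identification, data query, and AR display form a dynamic cycle.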
More preferably, in step 2 the camera group comprises multiple cameras installed in different areas; each camera comprises a lens, an image sensor, a pan-tilt head, and an image-sensor pose detection module, where the pan-tilt head controls the orientation of the image sensor and the pose detection module measures its direction. In step 3, the position and pose of the currently selected camera's image sensor are also collected. In step 4, the position and pose of the corresponding virtual camera's virtual image sensor are first set from the measured position and pose and the transformation between the physical workshop and labeled-model coordinate systems, so that the virtual imaging model in the labeled model matches the physical workshop's imaging model, and a virtual composite image of the labeled model is synthesized from the camera imaging model. Then, because the composite image and the current frame share the same imaging model, imaging position, and imaging pose, the positional correspondence between proxy shapes in the composite image and devices in the current frame is determined; the labels of the proxy shapes corresponding to the devices the user chose to identify are read, and from them the device identifiers and imaging areas in the current frame are determined.
More preferably, the labels are color labels: the proxy shapes of different devices are rendered in different colors, and a one-to-one mapping between colors and device identifiers is established. Using the positional correspondence between proxy shapes in the virtual composite image and devices in the current frame, reading a pixel's color value identifies the device identifier of the corresponding device in the current frame, and the region covered by pixels of that color value determines the device's imaging area.
More preferably, the method also employs a workshop information display setting module and an AR information display interface library; the interface library defines several types of display interface, and the setting module specifies, for each device, the display interface type for its parameters and/or digital twin information and which information or parameters the interface shows. In step 5, after a device's digital twin information is obtained and its information display area is determined, that device's settings are read from the setting module to obtain the display interface type and the information or parameters to display. In step 6, a display interface of the corresponding type is fetched from the interface library, superimposed on the device's information display area in the current frame image, and the device's information or parameters are shown through it.
Preferably, the devices whose information must be displayed are identified according to the user's settings. If all device information is set to be displayed, all devices in the current frame image are identified in step 4. If the information of the device pointed at by the mouse is set to be displayed, step 4 reads the mouse position on the current frame image, determines the corresponding position in the labeled model from the transformation between the physical workshop and labeled-model coordinate systems, finds the label of the proxy shape at that position, and from the label identifies the device the mouse points at, its identifier, and its imaging area on the current frame.
The invention has the following beneficial effects:
1. The invention discloses both an augmented reality system and a corresponding method oriented to workshop digital twinning.
2. The system and method require no complex three-dimensional model of the physical workshop; video images replace the three-dimensional model display, and the picture is smoother.
3. Whether each device's information is displayed, the display interface type, the interface parameters, and so on can all be configured; the display mode is more flexible and operation is simpler and more convenient.
4. The system and method can determine the device identifier pointed at by the mouse and display that device's related information.
Drawings
FIG. 1 is a system block diagram of the workshop-digital-twin-oriented augmented reality system of the invention;
FIG. 2 is a flowchart of the augmented reality method of the invention when information for all devices is displayed;
FIG. 3 is a flowchart of the augmented reality method of the invention when information for the device pointed at by the mouse is displayed.
Detailed Description
The invention will now be described in detail with reference to the drawings and to specific embodiments.
Referring to fig. 1 and fig. 2, a workshop-digital-twin-oriented augmented reality system comprises a physical workshop, a workshop digital twin system, a camera group, an image acquisition module, a workshop three-dimensional model labeling module, a device identification module, a data query and AR registration module, and an AR display module. The workshop digital twin system outputs digital twin information. The camera group generally consists of at least two cameras fixed in the physical workshop, which collect video images of the workshop's current state. The image acquisition module collects the workshop video images shot by the camera the user selects. The workshop three-dimensional model labeling module constructs a virtual three-dimensional model of the physical workshop in three-dimensional modeling software (such as MultiGen Creator); each device is modeled as a proxy shape (a basic solid such as a cuboid, sphere, or ellipsoid) representing its spatial shape and position, each proxy shape is labeled, a one-to-one correspondence between labels and device identifiers is established, and the workshop three-dimensional labeled model is generated. The device identification module identifies the devices whose information should be displayed according to the user's settings: it obtains the current frame image from the workshop video and, using the positional correspondence between proxy shapes in the labeled model and devices in the current frame together with the proxy shapes' labels, identifies each device's identifier and its imaging area on the current frame. The data query and AR registration module queries the workshop digital twin system by device identifier to obtain the corresponding digital twin information and determines each device's information display area from its imaging area on the current frame. The AR display module superimposes each device's digital twin information on its information display area on the current frame image, realizing AR display of workshop device information. Since the current frame is extracted from a continuously changing video stream, device identification, data query, and AR display form a dynamic loop.
The information display area may lie within the imaging area or in its vicinity.
The workshop digital twin system comprises sensors, edge computing devices, a bus communication module, a digital twin model system, and the digital twin information itself. The sensors and edge computing devices detect workshop state information, such as machining information of a machine tool, machine-tool task information, logistics information, and robot state information, and transmit it over the bus communication module to the digital twin model system. The workshop digital twin model system is the mapping of the physical workshop inside the computer and includes simulation and prediction models of the physical workshop; taking the workshop state information as input, it generates digital twin information comprising both simulation/prediction information and state information.
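The digital twin information could be represented, for instance, as a record combining sensed state and simulation/prediction output. The field names and values below are invented for illustration and are not specified by the patent:

```python
from dataclasses import dataclass, field

@dataclass
class TwinInfo:
    device_id: str
    state: dict = field(default_factory=dict)       # sensed state from sensors/edge devices
    prediction: dict = field(default_factory=dict)  # simulation/prediction model output

# Hypothetical twin database keyed by device identifier.
TWIN_DB = {
    "lathe_01": TwinInfo("lathe_01",
                         state={"spindle_rpm": 1200, "task": "part_A"},
                         prediction={"eta_min": 14}),
}

def query_twin(device_id):
    """Data-query step: fetch a device's digital twin information by ID."""
    return TWIN_DB.get(device_id)
```

The data query and AR registration module would call such a lookup with each device identifier recognized in the current frame.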
The camera group comprises multiple cameras installed in different areas. Each camera comprises a lens, an image sensor, a pan-tilt head, and an image-sensor pose detection module; the pan-tilt head controls the orientation of the image sensor, and the pose detection module measures the direction it faces.
Taking the transformation between the physical workshop coordinate system and the labeled-model coordinate system to be an identity relationship, and taking color labels as the example, the device identification process is described in detail below.
The workshop three-dimensional model labeling module further renders the proxy shapes of different devices in different colors and establishes a one-to-one mapping between colors and device identifiers; it also unifies the labeled model's coordinate system with the physical workshop's, so that each physical device's coordinates coincide with those of its proxy shape and each physical camera's coordinates coincide with those of its virtual camera. The image acquisition module collects the workshop video image shot by the camera the user selects, together with the position P(x, y, z) and pose Q(α, β, θ) of that camera's image sensor. The device identification module obtains P(x, y, z) and Q(α, β, θ) and passes them to the labeling module; at the same time, it reads the current frame of the workshop video.
The labeling module sets the virtual camera's virtual image sensor to position P(x, y, z) and pose Q(α, β, θ), so that the virtual imaging model in the labeled model matches the physical workshop's imaging model, and synthesizes a virtual composite image of the labeled model from the camera imaging model; on the composite image, different colors correspond to different devices. Because the composite image and the current frame share the same imaging model, imaging position, and imaging pose, the identification module establishes the positional correspondence between proxy shapes in the composite image and devices in the current frame; for each device to be identified it reads the pixel color value, determines from the color the device's identifier in the current frame, and determines the device's imaging area from the region covered by pixels of that color.
In this embodiment, the system further comprises a workshop information display setting module and an AR information display interface library. The interface library defines several types of display interface controls, such as a dashboard control, a digital-tube (seven-segment) control, and a virtual oscilloscope control. The setting module specifies, for each device, the display interface type for its parameters and/or digital twin information and which information or parameters the interface shows. After the data query and AR registration module obtains a device's digital twin information and determines its information display area, it sends the display interface type, the information display area, and the information or parameters to display to the AR display module according to that device's settings; the AR display module fetches a display interface of the corresponding type from the interface library, superimposes it on the device's information display area in the current frame image, and shows the device's information or parameters through that interface.
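The interface library could be sketched as a registry keyed by interface type — dashboard, digital tube, virtual oscilloscope, as listed above. In this sketch all names are hypothetical and rendering is stubbed as a string; a real control would draw graphics:

```python
# Registry of AR display-interface controls, keyed by interface type.
# Each entry is stubbed here as a text renderer for one named parameter.
INTERFACE_LIBRARY = {
    "dashboard":    lambda name, value: f"[dial {name}: {value}]",
    "digital_tube": lambda name, value: f"[7seg {name}: {value}]",
    "oscilloscope": lambda name, value: f"[scope {name}: {value}]",
}

def render_overlay(interface_type, name, value):
    """Fetch the control of the configured type and render one parameter."""
    return INTERFACE_LIBRARY[interface_type](name, value)
```

The workshop information display setting module would supply `interface_type` per device, and the AR display module would place the rendered control in that device's information display area.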
In this embodiment, the devices whose information must be displayed are identified according to the user's settings. If the user sets all device information to be displayed, all devices in the current frame image are identified, and the information of every identified device is acquired and superimposed on the current frame. Alternatively, if the user sets the information of the device pointed at by the mouse to be displayed (see fig. 3), the device identification module first reads the mouse position on the current frame image, then determines the corresponding position in the labeled model from the transformation between the physical workshop and labeled-model coordinate systems, finds the label of the proxy shape at that position, identifies from the label the device the mouse points at, its identifier, and its imaging area on the current frame, and finally superimposes the device's digital twin information on its information display area on the current frame image.
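Because the virtual composite image is pixel-aligned with the current frame, the mouse lookup can be reduced, under the color-label scheme, to reading the composite-image pixel under the cursor. This is a simplified sketch with hypothetical names and image representation:

```python
def device_under_mouse(virtual_image, color_to_device, mouse_xy):
    """Return the identifier of the device whose proxy shape is rendered
    under the mouse position, or None if the cursor is outside the image
    or over background. `virtual_image` is a row-major list of rows of
    (R, G, B) tuples; `color_to_device` maps colors to device IDs."""
    x, y = mouse_xy
    rows = len(virtual_image)
    cols = len(virtual_image[0]) if rows else 0
    if not (0 <= x < cols and 0 <= y < rows):
        return None
    return color_to_device.get(tuple(virtual_image[y][x]))
```

Once the device identifier is known, its digital twin information is queried and superimposed as in the all-devices case.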
Embodiment 2
Referring to fig. 1 and fig. 2, a workshop-digital-twin-oriented augmented reality method comprises the following steps:
Step 1: construct a virtual three-dimensional model of the physical workshop, modeling each device as a proxy shape that represents its spatial shape and position; label each proxy shape, establish a one-to-one correspondence between labels and device identifiers, and generate the workshop three-dimensional labeled model.
Step 2: fix several cameras in the physical workshop and collect video images of the workshop's current state through them.
Step 3: collect the workshop video images shot by the camera the user selects.
Step 4: identify the devices whose information should be displayed according to the user's settings: acquire the current frame image from the workshop video and, using the positional correspondence between proxy shapes in the labeled model and devices in the current frame together with the proxy shapes' labels, identify each device's identifier and its imaging area on the current frame.
Step 5: query the workshop digital twin system by device identifier to obtain the corresponding digital twin information, and determine each device's information display area from its imaging area on the current frame.
Step 6: superimpose each device's digital twin information on its information display area on the current frame image, realizing AR display of workshop device information.
In step 2, the camera group includes a plurality of cameras installed in different areas. Each camera includes a lens, an image sensor, a pan-tilt head and an image sensor pose detection module; the pan-tilt head controls the orientation of the image sensor, and the pose detection module detects the direction in which the image sensor points. In step 3, the position and pose of the image sensor of the currently selected camera are also acquired. In step 4, the position and pose of the virtual image sensor of the virtual camera corresponding to the currently selected camera are first set according to the image sensor's position and pose and the conversion relation between the physical workshop coordinate system and the workshop three-dimensional annotation model coordinate system, so that the virtual imaging model in the workshop three-dimensional annotation model is consistent with the imaging model of the physical workshop; a virtual composite image of the workshop three-dimensional annotation model is then synthesized according to the camera imaging model. Next, based on the consistency of the imaging model, imaging position and imaging pose between the virtual composite image and the current frame image, the positional correspondence between the proxy shapes in the virtual composite image and the equipment in the current frame image is determined; for each device the user has chosen to identify, the label of the corresponding proxy shape is read from the virtual composite image, and the device identifier and the device's imaging area in the current frame image are determined from that label.
In this embodiment, the labels are color labels: the proxy shapes of different devices are rendered in different colors, and a one-to-one mapping between colors and device identifiers is established. Using the positional correspondence between the proxy shapes in the virtual composite image and the equipment in the current frame image, the color value of a pixel is read to determine the identifier of the device corresponding to that color in the current frame image, and the device's imaging area in the current frame image is determined from the region occupied by pixels of that color. For example, taking the minimum and maximum abscissa and ordinate over all pixels whose color corresponds to device A defines a rectangle, and this rectangle can be taken as the imaging area of device A.
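The rectangular imaging area, i.e. the extremes of the row and column indices over all pixels carrying a device's label color, can be computed as follows. This is an illustrative pure-Python sketch; the function name and the image representation (a nested list of RGB tuples) are assumptions, not the patent's implementation:

```python
def imaging_area(label_image, device_color):
    """Bounding rectangle (x_min, y_min, x_max, y_max) of all pixels whose
    color equals device_color, or None if the device is not visible."""
    xs, ys = [], []
    for y, row in enumerate(label_image):
        for x, pixel in enumerate(row):
            if tuple(pixel) == device_color:
                xs.append(x)
                ys.append(y)
    if not xs:
        return None  # no pixel of this color: device outside the view
    return (min(xs), min(ys), max(xs), max(ys))
```

In practice a vectorized mask over the rendered image would replace the nested loop, but the bounding-rectangle logic is the same.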
This embodiment further employs a workshop information display setting module and an AR information display interface library. The AR information display interface library defines multiple types of display interfaces, and the workshop information display setting module sets, for each device, the display interface type of its parameters and/or digital twin information and the information or parameters each interface displays. In step 5, after the digital twin information of a device is obtained and its information display area is determined, the device's settings are read from the workshop information display setting module to obtain the display interface type and the information or parameters to be displayed. In step 6, a display interface of the corresponding type is obtained from the AR information display interface library, superimposed on the device's information display area in the current frame image, and used to display the device information or parameters.
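One natural way to organize such an interface library and the per-device display settings is a registry that maps interface-type names to render functions. The sketch below is purely illustrative: the registry, decorator and setting names are hypothetical, and a real implementation would draw widgets onto the frame rather than return strings:

```python
# Hypothetical sketch of an AR display-interface library: a registry of
# interface types, plus per-device settings choosing which interface
# renders which parameters of the digital twin information.
AR_INTERFACE_LIBRARY = {}

def register_interface(kind):
    """Decorator registering a renderer under an interface-type name."""
    def wrap(fn):
        AR_INTERFACE_LIBRARY[kind] = fn
        return fn
    return wrap

@register_interface("dashboard")
def render_dashboard(device_id, params):
    return f"[dashboard] {device_id}: {params}"

@register_interface("digital_tube")
def render_digital_tube(device_id, params):
    return f"[7-seg] {device_id}: {params}"

# Per-device settings from the (hypothetical) workshop information
# display setting module: interface type + parameters to show.
DISPLAY_SETTINGS = {"lathe_01": {"kind": "dashboard", "show": ["spindle_rpm"]}}

def render_device_info(device_id, twin_data):
    """Pick the configured interface and render only the chosen fields."""
    cfg = DISPLAY_SETTINGS[device_id]
    shown = {k: twin_data[k] for k in cfg["show"]}
    return AR_INTERFACE_LIBRARY[cfg["kind"]](device_id, shown)
```

The registry keeps the display interfaces decoupled from the query/registration logic, so new interface types (for example the virtual oscilloscope mentioned later) can be added without touching the AR display module.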
The devices whose information is to be displayed are identified according to the user settings, specifically: if the user sets all device information to be displayed, then in step 4 all devices in the current frame image are identified;
or, if the user sets the information of the device pointed to by the mouse to be displayed, then in step 4 the mouse position on the current frame image is read, the corresponding position in the three-dimensional annotation model is determined from the conversion relation between the physical workshop coordinate system and the workshop three-dimensional annotation model coordinate system, the label of the proxy shape at that position is thereby determined, and from that label the device identifier of the device pointed to by the mouse and its imaging area on the current frame image are identified.
Referring to fig. 3, a method is now described for determining the currently selected device by moving the mouse and outputting that device's digital twin information.
The preparation stage:
step 10, constructing a virtual three-dimensional model of the physical workshop, in which the equipment is modeled with proxy shapes, each proxy shape representing the spatial shape and position of a piece of equipment in the physical workshop; each proxy shape is then color-labeled: the proxy shapes of different devices are rendered in different colors, a one-to-one mapping between colors and device identifiers is established, and the workshop three-dimensional annotation model is generated; the virtual three-dimensional model of the workshop also contains virtual cameras placed to correspond to the spatial positions of the cameras in the physical workshop; the coordinate system of the workshop three-dimensional annotation model is unified with that of the physical workshop, so that the coordinates of each piece of equipment in the physical workshop agree with those of its proxy shape in the annotation model, and the coordinates of each physical camera agree with those of its virtual camera;
step 20, defining an AR information display interface library, which includes a dashboard display interface control, a seven-segment (digital tube) display interface control and a virtual oscilloscope display interface control;
step 30, defining a workshop information display setting module that sets, for each device, the display interface type of its parameters and/or digital twin information and the information or parameters each interface displays, for example the parameters of the interface controls such as display range and resolution;
and (3) a running cycle stage:
step 40, acquiring the current frame image of the workshop video shot by the camera selected by the user, together with the position P(x, y, z) and pose Q(alpha, beta, theta) of that camera's image sensor;
step 50, reading the position P(x, y, z) and pose Q(alpha, beta, theta) of the image sensor of the currently selected camera; setting the position of the virtual camera's virtual image sensor to P(x, y, z) and its pose to Q(alpha, beta, theta), so that the virtual imaging model in the workshop three-dimensional annotation model is consistent with the imaging model of the physical workshop, and synthesizing a virtual composite image of the workshop three-dimensional annotation model according to the camera imaging model, in which different colors correspond to different devices; reading the mouse coordinates (m, n) on the current frame image, reading the color value of the pixel at (m, n) in the virtual composite image, and identifying, from the one-to-one mapping between colors and physical devices, the device identifier corresponding to that pixel, which is the device pointed to by the mouse in the current frame image; at the same time, counting the region occupied by pixels of that color value to obtain the device's imaging area;
step 60, querying the workshop digital twin system with the device identifier to acquire the corresponding digital twin information; reading the device's settings in the workshop information display setting module to obtain the display interface type and the information or parameters to be displayed;
step 70, obtaining a display interface of the corresponding type from the AR information display interface library according to the obtained display interface type, determining the device's information display area from its imaging area, and superimposing the device's digital twin information on that area of the current frame image, thereby realizing AR display of workshop equipment information;
step 80, judging whether the program should end; if not, returning to step 40; if so, exiting the program.
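Steps 40 through 80 amount to a per-frame loop. The following is a minimal illustrative sketch of that loop, with every camera, rendering, identification, twin-query and overlay operation injected as a stub callable; all names are hypothetical and stand in for the modules described above:

```python
def run_cycle(grab_frame, camera_pose, render_labels, identify_device,
              query_twin, overlay, frames=1):
    """One pass per frame: pose the virtual camera, render the label
    image, find the selected device, query its twin data and overlay it.
    All callables are injected stubs, for illustration only."""
    results = []
    for _ in range(frames):
        frame = grab_frame()                      # step 40: current frame
        pose = camera_pose()                      # P(x, y, z), Q(alpha, beta, theta)
        labels = render_labels(pose)              # step 50: virtual composite image
        device = identify_device(labels)          # device under the mouse, or None
        if device is not None:
            twin = query_twin(device)             # step 60: digital twin info
            frame = overlay(frame, device, twin)  # step 70: AR display
        results.append(frame)
    return results                                # step 80: repeat until exit
```

Injecting the stages as callables keeps the loop testable without cameras or a twin system, which mirrors the modular split the patent describes (image acquisition, identification, query/registration, display).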
The foregoing description is merely illustrative of the present invention and is not intended to limit its scope; all equivalent structures or equivalent processes, and any direct or indirect application in other related technical fields, are likewise included within the scope of the present invention.

Claims (11)

1. The workshop digital twin-oriented augmented reality system comprises a physical workshop and a workshop digital twin system, wherein the workshop digital twin system outputs digital twin information, and the system is characterized by further comprising the following modules:
camera group: fixed in the physical workshop and used for collecting video images of the current state of the physical workshop;
and an image acquisition module: collecting workshop video images shot by a camera selected by a user;
and a workshop three-dimensional model labeling module: constructing a virtual three-dimensional model of a physical workshop, wherein a virtual three-dimensional model of equipment in the physical workshop is constructed by using proxy shapes, each proxy shape represents the spatial shape and position of the equipment in the physical workshop, then labeling each proxy shape, establishing a one-to-one correspondence between labeling and equipment identification, and generating a workshop three-dimensional labeling model;
and a device identification module: identifying the devices whose information is to be displayed according to the user settings; specifically, acquiring the current frame image from the workshop video images, and identifying the device identifiers of the devices to be identified and their imaging areas on the current frame image according to the positional correspondence between each proxy shape in the three-dimensional labeling model and the equipment in the current frame image and according to the label of each proxy shape;
data query and AR registration module: inquiring a workshop digital twin system according to the equipment identification to acquire corresponding digital twin information, and determining an information display area of each equipment according to an imaging area of each equipment on a current frame image;
AR display module: superimposing the digital twin information of the equipment on the information display area of the equipment on the current frame image, thereby realizing AR display of the workshop equipment information.
2. The workshop-digital-twinning-oriented augmented reality system of claim 1, wherein: the camera group comprises a plurality of cameras arranged in different areas, each camera comprising a lens, an image sensor, a pan-tilt head and an image sensor pose detection module, the pan-tilt head being used for controlling the orientation of the image sensor and the pose detection module for detecting the direction of the image sensor;
the image acquisition module is used for acquiring the position and pose of the image sensor of the currently selected camera;
the device identification module acquires the position and pose of the image sensor and sends this information to the workshop three-dimensional model labeling module;
the workshop three-dimensional model labeling module sets the position and pose of the virtual image sensor of the currently selected virtual camera according to the conversion relation between the physical workshop coordinate system and the workshop three-dimensional labeling model coordinate system, so that the virtual imaging model in the workshop three-dimensional labeling model is consistent with the imaging model of the physical workshop, and synthesizes a virtual composite image of the workshop three-dimensional labeling model according to the camera imaging model;
the device identification module determines the positional correspondence between the proxy shapes in the virtual composite image and the equipment in the current frame image based on the consistency of the imaging model, imaging position and imaging pose between the virtual composite image and the current frame image, reads the label of the corresponding proxy shape in the virtual composite image for each device the user has chosen to identify, and determines the device identifier and the device's imaging area in the current frame image from that label.
3. The workshop-digital-twinning-oriented augmented reality system according to claim 2, wherein: the conversion relation between the physical workshop coordinate system and the workshop three-dimensional labeling model coordinate system is an identity relation (the two coordinate systems coincide), specifically: the workshop three-dimensional model labeling module further unifies the coordinate system of the workshop three-dimensional labeling model with the physical workshop coordinate system, so that the coordinates of each piece of equipment in the physical workshop agree with those of its proxy shape in the workshop three-dimensional labeling model, and the coordinates of each physical camera agree with those of its virtual camera in the workshop three-dimensional labeling model.
4. The workshop-digital-twinning-oriented augmented reality system according to any one of claims 1 to 3, wherein: the labels are color labels; the proxy shapes of different devices are rendered in different colors, and a one-to-one mapping between colors and device identifiers is established; according to the positional correspondence between the proxy shapes in the virtual composite image and the equipment in the current frame image, the color value of a pixel is read to determine the identifier of the device corresponding to that color in the current frame image, and the device's imaging area in the current frame image is determined from the region occupied by pixels of that color.
5. The workshop-digital-twinning-oriented augmented reality system of claim 1, further comprising a workshop information display setting module and an AR information display interface library, wherein the AR information display interface library defines multiple types of display interfaces, and the workshop information display setting module is used for setting the display interface type of each device's parameters and/or digital twin information and the information or parameters displayed by each interface; after obtaining the digital twin information of a device and determining its information display area, the data query and AR registration module sends the display interface type, the information display area and the information or parameters to be displayed to the AR display module according to the device's settings in the workshop information display setting module; the AR display module acquires a display interface of the corresponding type from the AR information display interface library, superimposes it on the device's information display area in the current frame image, and displays the device's information or parameters through it.
6. The workshop-digital-twinning-oriented augmented reality system of claim 1, wherein the devices whose information is to be displayed are identified according to the user settings, specifically: the user sets all device information to be displayed, and all devices in the current frame image are identified;
or, the device identification module reads the mouse position on the current frame image, determines the corresponding position in the three-dimensional labeling model from the conversion relation between the physical workshop coordinate system and the workshop three-dimensional labeling model coordinate system, thereby determines the label of the proxy shape at that position, and identifies from that label the device identifier of the device pointed to by the mouse and its imaging area on the current frame image.
7. The workshop digital twinning-oriented augmented reality method is characterized by comprising the following steps of:
step 1, constructing a virtual three-dimensional model of a physical workshop, wherein a virtual three-dimensional model of equipment in the physical workshop is constructed by using proxy shape bodies, each proxy shape body represents the spatial shape and position of the equipment in the physical workshop, then labeling each proxy shape body, establishing a one-to-one correspondence between labeling and equipment identification, and generating a workshop three-dimensional labeling model;
step 2, fixing a camera group comprising a plurality of cameras in a physical workshop, and collecting video images of the current state of the physical workshop through the cameras;
step 3, collecting workshop video images shot by a camera selected by a user;
step 4, identifying the devices whose information is to be displayed according to the user settings: acquiring the current frame image from the workshop video images, and identifying the device identifiers of the devices to be identified and their imaging areas on the current frame image according to the positional correspondence between each proxy shape in the three-dimensional annotation model and the equipment in the current frame image and according to the label of each proxy shape;
step 5, inquiring a workshop digital twin system according to the equipment identification to acquire corresponding digital twin information, and determining an information display area of the equipment according to an imaging area of the equipment on the current frame image;
step 6, superimposing the digital twin information of the equipment on the information display area of the equipment on the current frame image, thereby realizing AR display of workshop equipment information.
8. The workshop-digital-twinning-oriented augmented reality method of claim 7, wherein: in step 2, the plurality of cameras of the camera group are installed in different areas, each camera comprising a lens, an image sensor, a pan-tilt head and an image sensor pose detection module, the pan-tilt head being used for controlling the orientation of the image sensor and the pose detection module for detecting the direction of the image sensor;
in step 3, the position and pose of the image sensor of the currently selected camera are also collected;
in step 4, the position and pose of the virtual image sensor of the virtual camera corresponding to the currently selected camera are first set according to the image sensor's position and pose and the conversion relation between the physical workshop coordinate system and the workshop three-dimensional labeling model coordinate system, so that the virtual imaging model in the workshop three-dimensional labeling model is consistent with the imaging model of the physical workshop, and a virtual composite image of the workshop three-dimensional labeling model is synthesized according to the camera imaging model; then, based on the consistency of the imaging model, imaging position and imaging pose between the virtual composite image and the current frame image, the positional correspondence between the proxy shapes in the virtual composite image and the equipment in the current frame image is determined, the label of the corresponding proxy shape in the virtual composite image is read for each device the user has chosen to identify, and the device identifier and the device's imaging area in the current frame image are determined from that label.
9. The workshop-digital-twinning-oriented augmented reality method according to claim 7 or 8, wherein: the labels are color labels; the proxy shapes of different devices are rendered in different colors, and a one-to-one mapping between colors and device identifiers is established; according to the positional correspondence between the proxy shapes in the virtual composite image and the equipment in the current frame image, the color value of a pixel is read to determine the identifier of the device corresponding to that color in the current frame image, and the device's imaging area in the current frame image is determined from the region occupied by pixels of that color.
10. The workshop-digital-twinning-oriented augmented reality method of claim 7, further employing a workshop information display setting module and an AR information display interface library, wherein the AR information display interface library defines multiple types of display interfaces, and the workshop information display setting module is used for setting the display interface type of each device's parameters and/or digital twin information and the information or parameters displayed by each interface; in step 5, after the digital twin information of a device is obtained and its information display area is determined, the device's settings in the workshop information display setting module are read to obtain the display interface type and the information or parameters to be displayed; in step 6, a display interface of the corresponding type is obtained from the AR information display interface library according to the obtained display interface type, superimposed on the device's information display area in the current frame image, and used to display the device information or parameters.
11. The workshop-digital-twinning-oriented augmented reality method of claim 7, wherein the devices whose information is to be displayed are identified according to the user settings, specifically: all device information is set to be displayed, and in step 4 all devices in the current frame image are identified; or, the information of the device pointed to by the mouse is set to be displayed, and in step 4 the mouse position on the current frame image is read, the corresponding position in the three-dimensional annotation model is determined from the conversion relation between the physical workshop coordinate system and the workshop three-dimensional annotation model coordinate system, the label of the proxy shape at that position is thereby determined, and from that label the device identifier of the device pointed to by the mouse and its imaging area on the current frame image are identified.
CN201911352218.2A 2019-12-25 2019-12-25 Workshop digital twinning-oriented augmented reality system and method Active CN111091611B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911352218.2A CN111091611B (en) 2019-12-25 2019-12-25 Workshop digital twinning-oriented augmented reality system and method


Publications (2)

Publication Number Publication Date
CN111091611A CN111091611A (en) 2020-05-01
CN111091611B true CN111091611B (en) 2023-05-26

Family

ID=70397137

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911352218.2A Active CN111091611B (en) 2019-12-25 2019-12-25 Workshop digital twinning-oriented augmented reality system and method

Country Status (1)

Country Link
CN (1) CN111091611B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111596604B (en) * 2020-06-12 2022-07-26 中国科学院重庆绿色智能技术研究院 Intelligent fault diagnosis and self-healing control system and method for engineering equipment based on digital twinning
CN111627262A (en) * 2020-06-12 2020-09-04 上海商汤智能科技有限公司 Sand table display system, method, computer equipment and storage medium
CN111857520A (en) * 2020-06-16 2020-10-30 广东希睿数字科技有限公司 3D visual interactive display method and system based on digital twins
CN111833426B (en) * 2020-07-23 2022-09-02 四川长虹电器股份有限公司 Three-dimensional visualization method based on digital twinning
WO2022040920A1 (en) * 2020-08-25 2022-03-03 南京翱翔智能制造科技有限公司 Digital-twin-based ar interactive system and method
CN111966068A (en) * 2020-08-27 2020-11-20 上海电机系统节能工程技术研究中心有限公司 Augmented reality monitoring method and device for motor production line, electronic equipment and storage medium
CN112150507B (en) * 2020-09-29 2024-02-02 厦门汇利伟业科技有限公司 3D model synchronous reproduction method and system for object posture and displacement
CN114157826B (en) * 2022-02-07 2022-08-02 西安塔力科技有限公司 Cooperative operation method based on digital twin body
CN115077488B (en) * 2022-05-26 2023-04-28 燕山大学 Factory personnel real-time positioning and monitoring system and method based on digital twinning
CN116543134B (en) * 2023-07-06 2023-09-15 金锐同创(北京)科技股份有限公司 Method, device, computer equipment and medium for constructing digital twin model
CN117156108B (en) * 2023-10-31 2024-03-15 中海物业管理有限公司 Enhanced display system and method for machine room equipment monitoring picture

Citations (2)

Publication number Priority date Publication date Assignee Title
CN109359507A (en) * 2018-08-24 2019-02-19 南京理工大学 A kind of twin body Model fast construction method of plant personnel number
CN109819233A (en) * 2019-01-21 2019-05-28 哈工大机器人(合肥)国际创新研究院 A kind of digital twinned system based on virtual image technology

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
WO2016179248A1 (en) * 2015-05-05 2016-11-10 Ptc Inc. Augmented reality system


Non-Patent Citations (1)

Title
Theory and technologies of cyber-physical fusion in digital twin workshop; Tao Fei; Cheng Ying; Cheng Jiangfeng; Zhang Meng; Xu Wenjun; Qi Qinglin; Computer Integrated Manufacturing Systems; Vol. 23, No. 8; full text *

Also Published As

Publication number Publication date
CN111091611A (en) 2020-05-01

Similar Documents

Publication Publication Date Title
CN111091611B (en) Workshop digital twinning-oriented augmented reality system and method
JP6551184B2 (en) Simulation apparatus, simulation method, and simulation program
Rose et al. Annotating real-world objects using augmented reality
US20120124509A1 (en) Information processor, processing method and program
CN111062873A (en) Parallax image splicing and visualization method based on multiple pairs of binocular cameras
JP3391405B2 (en) Object identification method in camera image
JP2014167786A (en) Automated frame-of-reference calibration for augmented reality
CN110569849B (en) AR (augmented reality) -glasses-based multi-instrument simultaneous identification and spatial positioning method and system
KR102566300B1 (en) Method for indoor localization and electronic device
CN108430032B (en) Method and equipment for realizing position sharing of VR/AR equipment
CN116127821A (en) Three-dimensional visual presentation method and platform for operation and maintenance data
CN115731170A (en) Mobile projection type assembly process guiding method and system
US10474124B2 (en) Image processing system, image processing device, method of reconfiguring circuit in FPGA, and program for reconfiguring circuit in FPGA
Kiswanto et al. Development of augmented reality (AR) for machining simulation of 3-axis CNC milling
CN111311728B (en) High-precision morphology reconstruction method, equipment and device based on optical flow method
Scheuermann et al. Mobile augmented reality based annotation system: A cyber-physical human system
CN109740703B (en) Intelligent display system of multimedia digital platform
CN115982824A (en) Construction site worker space management method and device, electronic equipment and storage medium
CN112150507B (en) 3D model synchronous reproduction method and system for object posture and displacement
CN209746614U (en) Simulation interaction visualization system of virtual robot workstation
Huang et al. Design and application of intelligent patrol system based on virtual reality
CN110689625B (en) Automatic generation method and device for customized face mixed expression model
BARON et al. APPLICATION OF AUGMENTED REALITY TOOLS TO THE DESIGN PREPARATION OF PRODUCTION.
CN113639639A (en) Data processing method and device for position data and storage medium
CN114693749A (en) Method and system for associating different physical coordinate systems

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant