CN116778119B - Man-machine cooperative assembly system based on augmented reality

Man-machine cooperative assembly system based on augmented reality

Info

Publication number
CN116778119B
Authority
CN
China
Prior art keywords
assembly
information
type
real
collaboration server
Prior art date
Legal status: Active
Application number
CN202310755410.6A
Other languages
Chinese (zh)
Other versions
CN116778119A (en)
Inventor
刘家东
李宇
费博文
沈新起
刘棣斐
夏彬
Current Assignee
China Academy of Information and Communications Technology CAICT
Original Assignee
China Academy of Information and Communications Technology CAICT
Priority date
Filing date
Publication date
Application filed by China Academy of Information and Communications Technology CAICT
Priority to CN202310755410.6A
Publication of CN116778119A
Application granted
Publication of CN116778119B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00: Manipulating 3D models or images for computer graphics
    • G06T19/006: Mixed reality
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00: Arrangements for program control, e.g. control units
    • G06F9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44: Arrangements for executing specific programs
    • G06F9/451: Execution arrangements for user interfaces
    • G06F9/453: Help systems
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00: Animation
    • G06T13/20: 3D [Three Dimensional] animation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00: Manipulating 3D models or images for computer graphics
    • G06T19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30: Computing systems specially adapted for manufacturing

Abstract

The invention relates to the technical field of intelligent assembly, and in particular to a man-machine cooperative assembly system based on augmented reality, comprising an environment processing end, AR (augmented reality) glasses and a collaboration server. The environment processing end processes the real environment of the man-machine cooperation assembly table and generates real environment information from the real-time assembly images of the assembly table range captured by the camera device of the AR glasses; this information is projected in the AR glasses, realizing the virtual-real combination of AR and making man-machine cooperation easier for assembly operators. The collaboration server fuses its output into the projection of the real environment information in the AR glasses, so that assembly description information can be projected in real time, and provides virtual guidance in the form of color marking, assembly type sequence, assembly type position and error prompts derived from the assembly description information, all projected in the AR glasses. This improves the recognition accuracy of assembly and guarantees the man-machine cooperative assembly effect.

Description

Man-machine cooperative assembly system based on augmented reality
Technical Field
The invention relates to the technical field of intelligent assembly, in particular to a man-machine cooperative assembly system based on augmented reality.
Background
In traditional manufacturing enterprises a great deal of time is spent on product assembly, and assembly quality directly influences product performance, so product assembly is a very important production activity. Assembly staff face many difficulties: the work demands considerable experience and skill, involves a large number of assembly steps, and its high repetitiveness causes excessive fatigue. At present, product assembly guidance mainly takes two typical forms: training plus a paper work instruction, and training plus an electronic work instruction. In the first form, training covers theoretical basics, process flow, safety specifications and practical operation; through it, staff gain a rough understanding of the work to be performed, and continued skill training during employment helps them keep building capability and assembly experience. The paper work instruction is the manual the staff follow during assembly, recording in detail the assembly process flow, assembly steps, assembly drawings, product bill of materials and similar information. Experienced staff can complete the assembly work easily from such an instruction, but the dense descriptions and two-dimensional drawings cause great difficulty for new staff, who may miss or misassemble parts or even damage them, causing unnecessary loss. Compared with paper instructions, an electronic work instruction saves large amounts of paper, consumables and printing, can play multimedia such as pictures, text, PPT slides and high-definition video, and is more flexible to update and release; the electronic SOP terminal is generally fixed on the workbench for convenient reference, and electronic SOPs are widely applied in industries such as electric power, electronics, automobiles and mobile phones.
Conventional assembly suffers from various problems. The one-to-one master-apprentice training mode used on industrial sites carries high labor cost and low efficiency; training that relies on assembly drawings or videos leaves assemblers struggling to understand the instructions, which easily leads to wrong or missed installation. During product assembly, assemblers cannot escape path dependence and need long-accumulated experience to complete tasks quickly and accurately; when heavy or dangerous objects must be carried on site and large numbers of repetitive, tedious tasks performed, safety accidents occur easily. An assembler learning from an instruction must hold it in hand and so cannot operate at the same time, cannot work in special environments, and the operation information of each step is not recorded, so the operation data cannot be analyzed, which hinders the improvement of assembler skill. For these reasons, intelligent man-machine cooperative assembly has become the main development direction of current assembly operations.
Chinese patent publication No. CN110744549A discloses an intelligent assembly process based on man-machine cooperation, in which a mechanical arm completes assembly actions through gesture recognition and voice recognition of information sent by operators. It thereby illustrates the problems of existing man-machine cooperative assembly operations: intelligent equipment has low recognition precision when responding to operator feedback on assembly working conditions, making it difficult to improve the assembly precision and efficiency of man-machine cooperation, and the lack of corresponding correction feedback at the actual operation end leads to high assembly risk for large heavy objects.
Disclosure of Invention
Therefore, the invention provides a man-machine cooperative assembly system based on augmented reality, to solve the problem of poor assembly effect caused by the low recognition precision of man-machine cooperative assembly in the prior art.
In order to achieve the above object, the present invention provides a man-machine cooperative assembly system based on augmented reality, comprising:
the environment processing end is used for carrying out environment scanning on the real environment of the man-machine cooperation assembly station to generate initial environment information, and can also convert the real-time assembly image into real-time assembly information and match the real-time assembly information with the initial environment information to generate real environment information;
the AR glasses are connected with the environment processing end and comprise a camera device and a display device, wherein the camera device is used for acquiring real-time assembly images of the range of the man-machine cooperation assembly table, and the display device is used for projecting real environment information generated by the environment processing end;
the collaboration server is respectively connected with the environment processing end and the AR glasses, can acquire initial environment information generated by the environment processing end and convert the initial environment information into a plurality of initial environment images, can identify the types of assembly parts in the initial environment images, performs color marking and position marking according to the types of the assembly parts, converts the assembly part images with the color marking into real assembly part information, and fuses the real assembly part information and the real environment information into a display device of the AR glasses for projection; the collaboration server can project assembly description information into the display device, and judge the assembly type sequence and the assembly type position of each assembly part in the real environment information according to the assembly description information so as to determine whether to project error prompts on the AR glasses.
Further, a fitting type database is provided in the collaboration server, storing shape information, color information and size information of each fitting type. When the collaboration server identifies the type of any fitting in an initial environment image, it acquires the real-time shape information of the fitting, sequentially computes the graphic similarity St between that real-time shape information and the stored shape information of each fitting type, and judges each St against a standard similarity Sb set in the collaboration server,
when St is less than Sb, the collaboration server judges that the assembly type is invalid matching, and the assembly type is not selected;
when St is larger than or equal to Sb, the collaboration server judges that the assembly type is valid matching, and marks the assembly type as a matching type.
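By way of non-limiting illustration, the matching rule above can be sketched as follows, assuming that shape information is stored as binary image masks and that the graphic similarity St is the overlap ratio St = Mc/Md defined later in this disclosure; the database layout, the threshold value and all identifiers are illustrative assumptions rather than features of the invention.

    import numpy as np

    STANDARD_SIMILARITY = 0.85  # Sb: illustrative value; in practice set in the collaboration server

    def graphic_similarity(rt_mask: np.ndarray, db_mask: np.ndarray) -> float:
        """St = Mc / Md for two equal-area binary masks (the real-time
        image is assumed already scaled as described for the St formula)."""
        md = int(db_mask.sum())
        mc = int(np.logical_and(rt_mask, db_mask).sum())
        return mc / md if md else 0.0

    def mark_matching_types(rt_mask: np.ndarray, fitting_db: dict) -> list:
        """Mark every fitting type whose St >= Sb as a matching type."""
        matches = []
        for type_name, record in fitting_db.items():
            st = graphic_similarity(rt_mask, record["shape_mask"])
            if st >= STANDARD_SIMILARITY:   # St >= Sb: valid match
                matches.append(type_name)
            # St < Sb: invalid match, the type is not selected
        return matches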
Further, when the collaboration server completes the determination of the graphic similarity St for all the fitting types stored in the fitting type database, the collaboration server determines the number of matching types,
if the number of marked matching types is zero, the collaboration server selects, from the plurality of initial environment images converted from the initial environment information, any initial environment image other than the one currently being identified, and identifies the type of the assembly part there, until the number of marked matching types for the assembly part is not zero;
if the number of the matched types of the marks is one, the collaboration server judges that the type of the assembly part is the matched type of the marks, the assembly part is subjected to color marking according to the color information of the matched type in the assembly description information, and the position information of the assembly part is acquired from the initial environment information to carry out position marking;
if the number of the matching types of the marks is greater than one, the collaboration server judges the size information of each matching type of the marks so as to mark the assembly with colors.
Further, when the number of marked matching types is greater than one, the collaboration server acquires the image area information of the assembly part in an initial environment image, acquires the position information of the assembly part in that image, and calculates a relative scanning distance. From the relative scanning distance and the image area information it calculates the actual size information of the assembly part, takes the difference between the actual size information and the size information of each marked matching type in turn, and sorts the absolute values of these differences from small to large. The matching type whose size information has the smallest absolute difference from the actual size information is selected as the type of the assembly part; the assembly part is then color-marked according to the color information of that matching type in the assembly description information, and its position information is acquired from the initial environment information for position marking.
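The three-way branching on the number of marked matching types, together with this size-based tie-break, can be summarized in the following sketch. The pinhole-style size estimate is an assumption added for illustration: the disclosure only states that actual size is computed from the relative scanning distance and the image area information, without fixing a camera model, and every identifier here is hypothetical.

    def actual_size_mm(image_area_px: float, scan_distance_mm: float,
                       k: float = 1.0) -> float:
        """Assumed pinhole-style estimate: linear size grows with distance
        and with the square root of image area; k stands in for the
        (hypothetical) focal-length and pixel-pitch calibration."""
        return k * scan_distance_mm * image_area_px ** 0.5

    def resolve_fitting_type(matches: list, size_mm: float, fitting_db: dict):
        """Apply the three cases: zero, one, or several marked matching types."""
        if not matches:
            return None          # caller retries on another initial environment image
        if len(matches) == 1:
            return matches[0]    # the unique match is taken as the fitting's type
        # Several matches: choose the type whose stored size information is
        # closest to the computed actual size (smallest absolute difference).
        return min(matches, key=lambda t: abs(fitting_db[t]["size_mm"] - size_mm))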
Further, the collaboration server can analyze the assembly description information, acquire the assembly type sequence and the assembly type position of each assembly part type, when any assembly part in the real environment information is assembled, the camera device acquires the color of the assembly part mark, determines the type of the assembly part according to the color of the assembly part mark, judges the type of the assembly part according to the preset assembly sequence,
if the type of the assembly part is the current assembly type in the assembly type sequence, the collaboration server acquires the assembly type position corresponding to the current assembly type from the assembly description information to judge so as to determine whether the assembly part is assembled;
if the type of the assembly part is not the current assembly type in the assembly type sequence, the collaboration server projects error prompts in the display device, selects the corresponding assembly part in the real environment information according to the current assembly type to create virtual guide, and fuses the virtual guide in the real environment information to project to the display device.
Further, a standard assembly offset distance Lb is arranged in the collaboration server, when the type of the assembly is the current assembly type in the assembly type sequence, the collaboration server obtains an assembly type position corresponding to the current assembly type in assembly description information, and obtains a real-time assembly position of the assembly in real environment information, the collaboration server calculates the real-time assembly offset distance Ls according to the real-time assembly position and the assembly type position, compares the real-time assembly offset distance Ls with the standard assembly offset distance Lb,
when Ls is less than or equal to Lb, the collaboration server judges that the real-time assembly offset distance does not exceed the standard assembly offset distance, the collaboration server projects assembly in the display device, selects a corresponding assembly part in real environment information according to the next assembly type in the assembly type sequence to create a virtual guide, and fuses the virtual guide in the real environment information to project to the display device;
when Ls > Lb, the collaboration server determines that the real-time assembly offset distance has exceeded the standard assembly offset distance, and the collaboration server will make a projection of an error prompt in the display device.
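The sequence gate and the offset-distance judgment described above can be combined into a single decision, sketched below. The Euclidean distance metric and the numeric threshold are assumptions for illustration; the disclosure does not fix either.

    import math

    STANDARD_OFFSET = 5.0   # Lb, in real-environment coordinate units (illustrative)

    def judge_assembly(detected_type: str, rt_position, step_idx: int,
                       type_sequence: list, type_positions: dict) -> str:
        """Return the projection action for one observed assembly operation."""
        current_type = type_sequence[step_idx]
        if detected_type != current_type:
            # Wrong step: project an error prompt plus a virtual guide
            # for the current assembly type.
            return "error_prompt_and_virtual_guide"
        ls = math.dist(rt_position, type_positions[current_type])  # Ls (Euclidean assumed)
        if ls <= STANDARD_OFFSET:   # Ls <= Lb: within tolerance
            return "project_assembled_and_guide_next_type"
        return "error_prompt"       # Ls > Lb: offset exceeded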
Further, the AR glasses are internally provided with a voice device for receiving voice instructions, the collaboration server is internally provided with a rapid execution instruction character number, the collaboration server can identify the voice instructions, acquire the real-time instruction character number of the voice instructions, judge the real-time instruction character number according to the rapid execution instruction character number,
if the number of the real-time instruction characters does not exceed the number of the quick execution instruction characters, extracting keywords from the voice instruction by the collaboration server, and executing actions according to the extracted keywords;
if the number of the real-time instruction characters exceeds the number of the quick execution instruction characters, the collaboration server extracts and matches instruction words of the voice instruction to determine the execution type of the voice instruction.
Further, a first instruction word bank and a second instruction word bank are arranged in the collaboration server, when the real-time instruction character number exceeds the quick execution instruction character number, the collaboration server extracts instruction words from the voice instruction to obtain voice instruction words, matches the voice instruction words with the first instruction word bank and the second instruction word bank,
if the voice command word is matched with the first command word library, the collaboration server judges that the execution type of the voice command is grammar recognition, carries out semantic understanding on the voice command, and controls and assists the robot to execute actions according to the semantic understanding result;
if the voice command word is matched with the second command word library, the collaboration server judges that the execution type of the voice command is dictation recognition, and converts the voice command into characters for input;
if the voice command word is not matched with the first command word stock and the second command word stock, the collaboration server judges that the voice command is an invalid command, and the collaboration server projects error prompts in the display device;
wherein, each instruction word in the first instruction word bank and each instruction word in the second instruction word bank have no repetition.
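This routing logic admits a compact sketch. The character threshold, the contents of the two instruction word banks, and the whitespace-based word extraction are all illustrative assumptions (real Chinese voice commands would require a proper segmenter rather than split()).

    FAST_EXEC_MAX_CHARS = 4                        # quick-execution character count (assumed)
    FIRST_WORD_BANK = {"grab", "move", "release"}  # grammar-recognition words (assumed)
    SECOND_WORD_BANK = {"note", "record"}          # dictation words (assumed)
    # The two banks share no instruction word, so bank membership alone
    # decides the execution type unambiguously.

    def route_voice_command(text: str) -> str:
        if len(text) <= FAST_EXEC_MAX_CHARS:
            return "quick execution: extract keywords and act on them"
        words = set(text.split())                  # naive word extraction (assumption)
        if words & FIRST_WORD_BANK:
            return "grammar recognition: drive the auxiliary robot"
        if words & SECOND_WORD_BANK:
            return "dictation recognition: convert the speech to text input"
        return "invalid command: project an error prompt"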
Further, in the development of the AR glasses end, the Vuforia SDK is integrated directly; in the development of the environment processing end, the ARFoundation SDK is layered on ARKit and the Vuforia SDK is layered on ARFoundation. Using the Area Target capability of Vuforia, the real environment of the man-machine cooperation assembly table is scanned into 3D Mesh information and stored, and the 3D Mesh is imported into the Unity 3D engine to form the initial environment information. The field environment is then scanned again using the underlying device capabilities of ARKit and SLAM, the computed real-time assembly information is matched with the initial environment information to generate the real environment information, and the real environment information is projected in the display device of the AR glasses, completing the projection of the real environment information.
Further, when the collaboration server calculates the graphic similarity St for the shape information of any assembly part type, the collaboration server acquires the real-time shape information of the assembly part in an initial environment image, cuts out the real-time image of the assembly part, scales the real-time image so that its image area equals that of the shape information image of the assembly part type stored in the assembly part type database, and overlaps the two images to calculate the graphic similarity St;
where St = Mc/Md, where Mc is the image area where the scaled assembly real-time image and the shape information image overlap, and Md is the image area of the shape information image.
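Written out, the scaling-and-overlap computation of St might look as follows; resizing the real-time image onto the stored shape image's pixel grid (one way of equalizing the two image areas) and the use of OpenCV are implementation assumptions.

    import cv2
    import numpy as np

    def graphic_similarity_st(rt_img: np.ndarray, db_img: np.ndarray) -> float:
        """St = Mc / Md; rt_img is the binary mask of the fitting cut out of
        the initial environment image, db_img the binary shape-information
        mask stored in the fitting type database."""
        rt = cv2.resize(rt_img.astype(np.uint8),
                        (db_img.shape[1], db_img.shape[0]),  # equalize image areas
                        interpolation=cv2.INTER_NEAREST)
        mc = int(np.logical_and(rt > 0, db_img > 0).sum())   # overlapping area Mc
        md = int((db_img > 0).sum())                         # shape image area Md
        return mc / md if md else 0.0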
Compared with the prior art, the invention has the following beneficial effects. By providing the environment processing end, the real environment of the man-machine cooperation assembly table is processed, and real environment information is generated from the real-time assembly images of the assembly table range acquired by the camera device of the AR glasses and projected in the AR glasses, realizing the virtual-real combination of AR and making man-machine cooperation easier for assembly operators. By providing the collaboration server, its output is fused into the projection of the real environment information in the AR glasses, so that assembly description information can be projected in real time, and virtual guidance in the form of color marking, assembly type sequence, assembly type position and error prompts is derived from the assembly description information and projected in the AR glasses, thereby improving the recognition accuracy of assembly and guaranteeing the man-machine cooperative assembly effect.
Further, the augmented reality technology provides an assembling environment combining virtual and real for an assembler, so that the assembler can acquire information required in the assembling operation process in time, all the information is displayed in an operation space in a holographic manner, and the memory and cognitive burden of the assembler is reduced.
In particular, with AR glasses the assembler need not put down tools to consult a paper document or electronic instruction, saving the time of switching back and forth and improving assembly efficiency. Using the AR glasses to control the auxiliary robot not only helps the assembler carry heavy or dangerous objects but also takes over repetitive, tedious tasks, leaving the assembler more time for more complex work and improving the assembler's daily working experience.
Further, once an assembler makes a mistake such as incorrect or missed assembly, the AR glasses give an alarm in time and prompt the assembler to correct it; otherwise the incorrectly assembled product cannot enter the next installation procedure. Moreover, compared with a traditional paper or electronic assembly instruction, all operations of the assembler can be recorded, and a work report can be generated by integrating and analyzing the data, helping the assembler improve.
Furthermore, an assembler wearing the AR glasses can interact with the system through voice; operations can be invoked by voice input, and the corresponding operation steps and 3D animations are displayed on the glasses, which is more flexible and convenient than gesture operation.
Drawings
Fig. 1 is a schematic diagram of a man-machine cooperative assembly system based on augmented reality according to an embodiment of the present invention.
Detailed Description
In order that the objects and advantages of the invention will become more apparent, the invention will be further described with reference to the following examples; it should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Preferred embodiments of the present invention are described below with reference to the accompanying drawings. It should be understood by those skilled in the art that these embodiments are merely for explaining the technical principles of the present invention, and are not intended to limit the scope of the present invention.
It should be noted that, in the description of the present invention, terms such as "upper," "lower," "left," "right," "inner," "outer," and the like indicate directions or positional relationships based on the directions or positional relationships shown in the drawings, which are merely for convenience of description, and do not indicate or imply that the apparatus or elements must have a specific orientation, be constructed and operated in a specific orientation, and thus should not be construed as limiting the present invention.
Furthermore, it should be noted that, in the description of the present invention, unless explicitly specified and limited otherwise, the terms "mounted," "connected," and "coupled" are to be construed broadly: a connection may be fixed, detachable, or integral; mechanical or electrical; direct, or indirect through an intermediate medium, or internal communication between two elements. The specific meaning of the above terms in the present invention can be understood by those skilled in the art according to the specific circumstances.
Referring to fig. 1, which is a schematic diagram of a man-machine cooperative assembly system based on augmented reality according to an embodiment of the present invention, the embodiment discloses a man-machine cooperative assembly system based on augmented reality, comprising:
the environment processing end is used for carrying out environment scanning on the real environment of the man-machine cooperation assembly station to generate initial environment information, and can also convert the real-time assembly image into real-time assembly information and match the real-time assembly information with the initial environment information to generate real environment information;
the AR glasses are connected with the environment processing end and comprise a camera device and a display device, wherein the camera device is used for acquiring real-time assembly images of the range of the man-machine cooperation assembly table, and the display device is used for projecting real environment information generated by the environment processing end;
the collaboration server is respectively connected with the environment processing end and the AR glasses, can acquire initial environment information generated by the environment processing end and convert the initial environment information into a plurality of initial environment images, can identify the types of assembly parts in the initial environment images, performs color marking and position marking according to the types of the assembly parts, converts the assembly part images with the color marking into real assembly part information, and fuses the real assembly part information and the real environment information into a display device of the AR glasses for projection; the collaboration server can project assembly description information into the display device, and judge the assembly type sequence and the assembly type position of each assembly part in the real environment information according to the assembly description information so as to determine whether to project error prompts on the AR glasses.
In this embodiment:
the configuration of AR glasses comprises CPU Qua lcomm Snapdragon, memory 4GB, memory 64GB, LPDDR4x system DRAM, inertial Measurement Unit (IMU), accelerometer, gyroscope, magnetometer, camera 8-MP still image, 1080p30 video.
Collaboration server configuration: CPU: Xeon Silver 4210R (2.4 GHz, 10C/20T, 9.6 GT/s, 13.75 MB cache, Turbo, HT); memory: 16 GB 3200 MHz DDR4 ECC RDIMM; hard disk: 480 GB SATA solid-state drive.
Auxiliary robot configuration: degrees of freedom: 6; reach: 924.5 mm; payload: 5 kg; weight: 24 kg; positioning accuracy: ±0.02 mm; power: 200 W; ambient temperature: 0-45 °C; protection grade: IP54.
Environment processing end configuration: screen size: 11 inches; resolution: 2388 × 1668; storage capacity: 128 GB; network connection: 802.11ax wireless LAN with simultaneous dual band (2.4 GHz and 5 GHz), Bluetooth 5.0.
By providing the environment processing end, the real environment of the man-machine cooperation assembly table is processed, and real environment information is generated from the real-time assembly images of the assembly table range and projected in the AR glasses, realizing the virtual-real combination of AR so that assembly operators can carry out man-machine cooperation more easily. By providing the collaboration server, its output is fused into the projection of the real environment information in the AR glasses, assembly description information can be projected in real time, and virtual guidance in the form of color marking, assembly type sequence, assembly type position and error prompts is derived from the assembly description information and projected in the AR glasses, improving the recognition accuracy of assembly and guaranteeing the man-machine cooperative assembly effect.
In this embodiment, all operations of the assembly staff are recorded and the information is fed back to the collaboration server, where the data are integrated and analyzed algorithmically so that a work report can finally be generated, helping the assembly staff improve.
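As one possible, purely illustrative realization of this recording loop, each operation judgment could be appended to a log that the collaboration server later aggregates into a work report; the record schema and the file-based transport are assumptions, not features of the disclosure.

    import json
    import time

    def log_operation(log_path: str, worker_id: str, step_idx: int,
                      fitting_type: str, result: str) -> None:
        """Append one assembly-operation record as a JSON line."""
        record = {"timestamp": time.time(), "worker": worker_id,
                  "step": step_idx, "type": fitting_type, "result": result}
        with open(log_path, "a", encoding="utf-8") as f:
            f.write(json.dumps(record) + "\n")

    def work_report(log_path: str) -> dict:
        """Aggregate the log into per-result counts for a simple work report."""
        counts = {}
        with open(log_path, encoding="utf-8") as f:
            for line in f:
                result = json.loads(line)["result"]
                counts[result] = counts.get(result, 0) + 1
        return counts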
Specifically, a fitting type database is provided in the collaboration server, storing shape information, color information and size information of each fitting type. When the collaboration server identifies the type of any fitting in an initial environment image, it acquires the real-time shape information of the fitting, sequentially computes the graphic similarity St between that real-time shape information and the stored shape information of each fitting type, and judges each St against the standard similarity Sb set in the collaboration server,
when St is less than Sb, the collaboration server judges that the assembly type is invalid matching, and the assembly type is not selected;
when St is larger than or equal to Sb, the collaboration server judges that the assembly type is valid matching, and marks the assembly type as a matching type.
Specifically, when the collaboration server has completed the determination of the graphic similarity St for all fitting types stored in the fitting type database, the collaboration server determines the number of matching types,
if the number of marked matching types is zero, the collaboration server selects, from the plurality of initial environment images converted from the initial environment information, any initial environment image other than the one currently being identified, and identifies the type of the assembly part there, until the number of marked matching types for the assembly part is not zero;
if the number of the matched types of the marks is one, the collaboration server judges that the type of the assembly part is the matched type of the marks, the assembly part is subjected to color marking according to the color information of the matched type in the assembly description information, and the position information of the assembly part is acquired from the initial environment information to carry out position marking;
if the number of the matching types of the marks is greater than one, the collaboration server judges the size information of each matching type of the marks so as to mark the assembly with colors.
Specifically, when the number of marked matching types is greater than one, the collaboration server acquires the image area information of the assembly part in an initial environment image, acquires the position information of the assembly part in that image, and calculates a relative scanning distance. From the relative scanning distance and the image area information it calculates the actual size information of the assembly part, takes the difference between the actual size information and the size information of each marked matching type in turn, and sorts the absolute values of these differences from small to large. The matching type whose size information has the smallest absolute difference from the actual size information is selected as the type of the assembly part; the assembly part is then color-marked according to the color information of that matching type in the assembly description information, and its position information is acquired from the initial environment information for position marking.
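For instance, with two marked matching types and made-up sizes, the tie-break reads:

    # Stored size information (mm) for two marked matching types (made-up values)
    db = {"bolt_M6": 6.0, "bolt_M8": 8.0}
    actual = 7.6  # actual size computed from scan distance and image area (mm)
    best = min(db, key=lambda t: abs(db[t] - actual))
    print(best)   # bolt_M8, since |8.0 - 7.6| = 0.4 < |6.0 - 7.6| = 1.6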
Specifically, the collaboration server can analyze the assembly description information, acquire the assembly type sequence and the assembly type position of each assembly part type, when any assembly part in the real environment information is assembled, the camera device acquires the color of the assembly part mark, determines the type of the assembly part according to the color of the assembly part mark, judges the type of the assembly part according to the preset assembly sequence,
if the type of the assembly part is the current assembly type in the assembly type sequence, the collaboration server acquires the assembly type position corresponding to the current assembly type from the assembly description information to judge so as to determine whether the assembly part is assembled;
if the type of the assembly part is not the current assembly type in the assembly type sequence, the collaboration server projects error prompts in the display device, selects the corresponding assembly part in the real environment information according to the current assembly type to create virtual guide, and fuses the virtual guide in the real environment information to project to the display device.
Specifically, a standard assembly offset distance Lb is arranged in the collaboration server, when the type of the assembly is the current assembly type in the assembly type sequence, the collaboration server obtains an assembly type position corresponding to the current assembly type in assembly description information, and obtains a real-time assembly position of the assembly in real environment information, the collaboration server calculates a real-time assembly offset distance Ls according to the real-time assembly position and the assembly type position, and compares the real-time assembly offset distance Ls with the standard assembly offset distance Lb,
when Ls is less than or equal to Lb, the collaboration server judges that the real-time assembly offset distance does not exceed the standard assembly offset distance, the collaboration server projects assembly in the display device, selects a corresponding assembly part in real environment information according to the next assembly type in the assembly type sequence to create a virtual guide, and fuses the virtual guide in the real environment information to project to the display device;
when Ls > Lb, the collaboration server determines that the real-time assembly offset distance has exceeded the standard assembly offset distance, and the collaboration server will make a projection of an error prompt in the display device.
Specifically, the AR glasses are also provided with a voice device for receiving voice instructions, the collaboration server is internally provided with a rapid execution instruction character number, the collaboration server can recognize the voice instructions, acquire the real-time instruction character number of the voice instructions, judge the real-time instruction character number according to the rapid execution instruction character number,
if the number of the real-time instruction characters does not exceed the number of the quick execution instruction characters, extracting keywords from the voice instruction by the collaboration server, and executing actions according to the extracted keywords;
if the number of the real-time instruction characters exceeds the number of the quick execution instruction characters, the collaboration server extracts and matches instruction words of the voice instruction to determine the execution type of the voice instruction.
Specifically, a first instruction word bank and a second instruction word bank are arranged in the collaboration server, when the number of real-time instruction characters exceeds the number of quick execution instruction characters, the collaboration server extracts instruction words from voice instructions to obtain voice instruction words, matches the voice instruction words with the first instruction word bank and the second instruction word bank,
if the voice command word is matched with the first command word library, the collaboration server judges that the execution type of the voice command is grammar recognition, carries out semantic understanding on the voice command, and controls and assists the robot to execute actions according to the semantic understanding result;
if the voice command word is matched with the second command word library, the collaboration server judges that the execution type of the voice command is dictation recognition, and converts the voice command into characters for input;
if the voice command word is not matched with the first command word stock and the second command word stock, the collaboration server judges that the voice command is an invalid command, and the collaboration server projects error prompts in the display device;
wherein, each instruction word in the first instruction word bank and each instruction word in the second instruction word bank have no repetition.
Specifically, in the development of the AR glasses end, the Vuforia SDK is integrated directly; in the development of the environment processing end, the ARFoundation SDK is layered on ARKit and the Vuforia SDK is layered on ARFoundation. Using the Area Target capability of Vuforia, the real environment of the man-machine cooperation assembly table is scanned into 3D Mesh information and stored, and the 3D Mesh is imported into the Unity 3D engine to form the initial environment information. The field environment is scanned again using the underlying device capabilities of ARKit and SLAM, the computed real-time assembly information is matched with the initial environment information to generate the real environment information, and the real environment information is projected in the display device of the AR glasses, completing the projection of the real environment information.
Specifically, when the collaboration server calculates the graphic similarity St for the shape information of any assembly part type, the collaboration server acquires the real-time shape information of the assembly part in an initial environment image, cuts out the real-time image of the assembly part, scales the real-time image so that its image area equals that of the shape information image of the assembly part type stored in the assembly part type database, and overlaps the two images to calculate the graphic similarity St;
where St = Mc/Md, where Mc is the image area where the scaled assembly real-time image and the shape information image overlap, and Md is the image area of the shape information image.
Thus far, the technical solution of the present invention has been described in connection with the preferred embodiments shown in the drawings, but it is easily understood by those skilled in the art that the scope of protection of the present invention is not limited to these specific embodiments. Equivalent modifications and substitutions for related technical features may be made by those skilled in the art without departing from the principles of the present invention, and such modifications and substitutions will be within the scope of the present invention.
The foregoing description is only of the preferred embodiments of the invention and is not intended to limit the invention; various modifications and variations of the present invention will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (5)

1. An augmented reality-based man-machine cooperative assembly system is characterized by comprising,
the environment processing end is used for carrying out environment scanning on the real environment of the man-machine cooperation assembly station to generate initial environment information, and can also convert the real-time assembly image into real-time assembly information and match the real-time assembly information with the initial environment information to generate real environment information;
the AR glasses are connected with the environment processing end and comprise a camera device and a display device, wherein the camera device is used for acquiring real-time assembly images of the range of the man-machine cooperation assembly table, and the display device is used for projecting real environment information generated by the environment processing end;
the collaboration server is respectively connected with the environment processing end and the AR glasses, can acquire initial environment information generated by the environment processing end and convert the initial environment information into a plurality of initial environment images, can identify the types of assembly parts in the initial environment images, performs color marking and position marking according to the types of the assembly parts, converts the assembly part images with the color marking into real assembly part information, and fuses the real assembly part information and the real environment information into a display device of the AR glasses for projection; the collaboration server can project assembly description information into the display device, and judge the assembly type sequence and the assembly type position of each assembly part in the real environment information according to the assembly description information so as to determine whether to project error prompts on the AR glasses;
the collaboration server is provided with a fitting type database, which stores shape information, color information and size information of each fitting type; when the collaboration server identifies the type of any fitting in an initial environment image, the collaboration server acquires real-time shape information of the fitting, sequentially calculates the graphic similarity St between the real-time shape information and the stored shape information of each fitting type, and judges the graphic similarity St of the shape information of any fitting type and the real-time shape information according to the standard similarity Sb set in the collaboration server,
when St is less than Sb, the collaboration server judges that the assembly type is invalid matching, and the assembly type is not selected;
when St is more than or equal to Sb, the collaboration server judges that the assembly type is effective matching, and marks the assembly type as a matching type;
when the collaboration server completes the determination of the graphic similarity St for all fitting types stored in the fitting type database, the collaboration server determines the number of matching types,
if the number of marked matching types is zero, the collaboration server selects, from the plurality of initial environment images converted from the initial environment information, any initial environment image other than the one currently being identified, and identifies the type of the assembly part there, until the number of marked matching types for the assembly part is not zero;
if the number of the matched types of the marks is one, the collaboration server judges that the type of the assembly part is the matched type of the marks, the assembly part is subjected to color marking according to the color information of the matched type in the assembly description information, and the position information of the assembly part is acquired from the initial environment information to carry out position marking;
if the number of the marked matching types is greater than one, the collaboration server judges the size information of each marked matching type so as to mark the assembly with colors;
the collaboration server, when the number of marked matching types is greater than one, acquires image area information of the assembly part in an initial environment image, acquires position information of the assembly part in the initial environment image and calculates a relative scanning distance, calculates actual size information of the assembly part according to the relative scanning distance and the image area information, takes the difference between the actual size information and the size information of each marked matching type in turn, sorts the absolute values of the differences between the size information of each marked matching type and the actual size information from small to large, selects the matching type with the smallest absolute difference between its size information and the actual size information as the type of the assembly part, marks the color of the assembly part according to the color information of the matching type in the assembly description information, and acquires the position information of the assembly part in the initial environment information to mark its position;
the collaboration server can analyze the assembly description information to obtain the assembly type sequence and the assembly type position of each assembly part type, when any assembly part in the real environment information is assembled, the camera device obtains the color of the assembly part mark, determines the type of the assembly part according to the color of the assembly part mark, judges the type of the assembly part according to the preset assembly sequence,
if the type of the assembly part is the current assembly type in the assembly type sequence, the collaboration server acquires the assembly type position corresponding to the current assembly type from the assembly description information to judge so as to determine whether the assembly part is assembled;
if the type of the assembly part is not the current assembly type in the assembly type sequence, the collaboration server projects error prompts in the display device, selects corresponding assembly parts in the real environment information according to the current assembly type to create virtual guide, and fuses the virtual guide in the real environment information to project to the display device;
the cooperation server is internally provided with a standard assembly offset distance Lb, when the type of the assembly is the current assembly type in the assembly type sequence, the cooperation server acquires the assembly type position corresponding to the current assembly type in the assembly description information, acquires the real-time assembly position of the assembly in the real environment information, calculates the real-time assembly offset distance Ls according to the real-time assembly position and the assembly type position, compares the real-time assembly offset distance Ls with the standard assembly offset distance Lb,
when Ls is less than or equal to Lb, the collaboration server judges that the real-time assembly offset distance does not exceed the standard assembly offset distance, the collaboration server projects assembly in the display device, selects a corresponding assembly part in real environment information according to the next assembly type in the assembly type sequence to create a virtual guide, and fuses the virtual guide in the real environment information to project to the display device;
when Ls > Lb, the collaboration server determines that the real-time assembly offset distance has exceeded the standard assembly offset distance, and the collaboration server will make a projection of an error prompt in the display device.
2. The augmented reality-based man-machine cooperative assembly system according to claim 1, wherein the AR glasses are further provided therein with a voice device for receiving voice commands, the collaboration server is provided therein with a rapid execution command character number, the collaboration server is capable of recognizing the voice commands, acquiring a real-time command character number of the voice commands, and determining the real-time command character number according to the rapid execution command character number,
if the number of the real-time instruction characters does not exceed the number of the quick execution instruction characters, extracting keywords from the voice instruction by the collaboration server, and executing actions according to the extracted keywords;
if the number of the real-time instruction characters exceeds the number of the quick execution instruction characters, the collaboration server extracts and matches instruction words of the voice instruction to determine the execution type of the voice instruction.
3. The augmented reality-based man-machine cooperative assembly system according to claim 2, wherein the collaboration server is provided with a first instruction word library and a second instruction word library, and when the number of real-time instruction characters exceeds the number of quick execution instruction characters, the collaboration server extracts instruction words from the voice instruction to obtain voice instruction words, and matches the voice instruction words with the first instruction word library and the second instruction word library,
if the voice command word is matched with the first command word library, the collaboration server judges that the execution type of the voice command is grammar recognition, carries out semantic understanding on the voice command, and controls and assists the robot to execute actions according to the semantic understanding result;
if the voice command word is matched with the second command word library, the collaboration server judges that the execution type of the voice command is dictation recognition, and converts the voice command into characters for input;
if the voice command word is not matched with the first command word stock and the second command word stock, the collaboration server judges that the voice command is an invalid command, and the collaboration server projects error prompts in the display device;
wherein, each instruction word in the first instruction word bank and each instruction word in the second instruction word bank have no repetition.
4. The augmented reality-based man-machine cooperative assembly system according to claim 1, wherein the Vuforia SDK is integrated directly in the development of the AR glasses end; in the development of the environment processing end, the ARFoundation SDK is layered on ARKit and the Vuforia SDK is layered on ARFoundation; the real environment of the man-machine cooperation assembly table is scanned into 3D Mesh information and stored using the Area Target capability of Vuforia, the 3D Mesh is imported into the Unity 3D engine to form initial environment information, the field environment is scanned again using the underlying device capabilities of ARKit and SLAM, the calculated real-time assembly information is matched with the initial environment information, and the real environment information is generated and projected in the display device of the AR glasses, completing the projection of the real environment information.
5. The augmented reality-based man-machine cooperative assembly system according to claim 1, wherein when the cooperative server calculates the graphic similarity St of shape information of any assembly type, the cooperative server acquires the real-time shape information of the assembly in an initial environment image, intercepts the real-time image of the assembly, scales the real-time image of the assembly to make the image area of the real-time image of the assembly equal to that of the shape information image of the assembly type stored in the assembly type database, and overlaps the two images to calculate the graphic similarity St;
where St = Mc/Md, where Mc is the image area where the scaled assembly real-time image and the shape information image overlap, and Md is the image area of the shape information image.
CN202310755410.6A (filed 2023-06-26, priority 2023-06-26): Man-machine cooperative assembly system based on augmented reality. Granted as CN116778119B (Active).

Priority Applications (1)

Application: CN202310755410.6A; priority date: 2023-06-26; filing date: 2023-06-26; title: Man-machine cooperative assembly system based on augmented reality

Publications (2)

CN116778119A, published 2023-09-19
CN116778119B, published 2024-03-12

Family

Family ID: 88007696

Family Applications (1)

CN202310755410.6A (Active): Man-machine cooperative assembly system based on augmented reality; priority date 2023-06-26; filing date 2023-06-26

Country Status (1)

CN: CN116778119B


Patent Citations (14)

* Cited by examiner, † Cited by third party

CN107547554A * (priority 2017-09-08, published 2018-01-05): Smart device remote assistance system based on augmented reality
CN109920062A * (priority 2019-02-01, published 2019-06-21): AR-glasses-based assembly guidance method and system for changeable parts
CN110076277A * (priority 2019-05-07, published 2019-08-02): Pin-matching method based on augmented reality
CN110309779A * (priority 2019-07-01, published 2019-10-08): Method, device and electronic equipment for monitoring part operation actions in an assembly scene
CN110413122A * (priority 2019-07-30, published 2019-11-05): AR glasses application method and system with operating-scenario identification
CN110928418A * (priority 2019-12-11, published 2020-03-27): Aviation cable auxiliary assembly method and system based on MR
CN111612177A * (priority 2020-05-18, published 2020-09-01): Augmented reality intelligent operation and maintenance system based on interactive semantics
CN112085232A * (priority 2020-09-14, published 2020-12-15): Operation inspection system and method based on augmented reality technology
CN114567535A * (priority 2022-03-10, published 2022-05-31): Product interaction and fault diagnosis method based on augmented reality
CN115309113A * (priority 2022-06-14, published 2022-11-08): Guiding method for part assembly and related equipment
CN115294308A * (priority 2022-08-15, published 2022-11-04): Augmented reality auxiliary assembly operation guiding system based on deep learning
CN115331132A * (priority 2022-08-16, published 2022-11-11): Detection method and device for automobile parts, electronic equipment and storage medium
CN115346413A * (priority 2022-08-19, published 2022-11-15): Assembly guidance method and system based on virtual-real fusion
CN115731170A * (priority 2022-11-14, published 2023-03-03): Mobile projection type assembly process guiding method and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party

Wang Pengpeng et al., "Intelligent assembly system for electrical wire connectors based on augmented reality technology," Ship Engineering, Vol. 44, No. 11, pp. 112-117 *

Also Published As

CN116778119A, published 2023-09-19


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant