CN114201082A - Interaction method, equipment and storage medium for 3D scanning synthetic interface - Google Patents

Interaction method, equipment and storage medium for 3D scanning synthetic interface

Info

Publication number
CN114201082A
CN114201082A (application CN202111413577.1A)
Authority
CN
China
Prior art keywords
button
user
scanning
interface
synthesis
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111413577.1A
Other languages
Chinese (zh)
Other versions
CN114201082B (en
Inventor
王湲
罗苇
张世琅
陈号
蒋成
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Chizi Technology Co ltd
Original Assignee
Wuhan Chizi Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Chizi Technology Co ltd filed Critical Wuhan Chizi Technology Co ltd
Priority to CN202111413577.1A priority Critical patent/CN114201082B/en
Publication of CN114201082A publication Critical patent/CN114201082A/en
Application granted granted Critical
Publication of CN114201082B publication Critical patent/CN114201082B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04842Selection of displayed objects or displayed text elements

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application provides an interaction method, a device, and a storage medium for a 3D scanning synthesis interface, comprising the following steps: acquiring scan data layers; comparing the number of scan data layers with a preset number of scan data layers; highlighting either the add button or the synthesis button on the synthesis interface according to that comparison; receiving a first synthesis instruction applied by the user to the synthesis button, automatically aligning the scan data layers, and generating a first 3D model; displaying operation buttons, an editing panel, and a first text prompt box on the synthesis interface; receiving a corresponding operation instruction applied by the user to an operation button or the editing panel, and processing the scan data layer indicated by that instruction; and generating a second 3D model from the processed scan data layers. By highlighting the synthesis button or the add button and displaying the operation buttons, the editing panel, and the text prompt boxes, the user is guided to further process the scan data layers, so that the synthesis step can be completed well and a high-quality 3D model synthesized.

Description

Interaction method, equipment and storage medium for 3D scanning synthetic interface
Technical Field
The present application relates to the field of 3D scanning technologies, and in particular, to an interaction method, device, and storage medium for a 3D scanning composite interface.
Background
3D scanning technology can provide three-dimensional displays in many aspects of social life. With the development of scanning technology, software can scan an object's structure from multiple directions and thereby build a three-dimensional digital model of the object.
A 3D scanning product can acquire, through a scanner, multiple pieces of scan data of the same object in different postures, and can automatically align, process, and synthesize the scan data into an accurate three-dimensional mesh. For an easily identifiable object, a user can obtain an ideal 3D model with automatic alignment alone; for objects that are difficult to recognize (for example, large objects, or objects with repetitive structural texture or a symmetric shape), the user must add alignment points manually to improve the quality of the resulting model.
For novice users, the synthesis step carries a certain learning cost. In the current conventional design of a scanner's synthesis interface, all operation buttons and panels are displayed on the interface at once, so the user must memorize the device's working state and the available operations, relying on familiarity with the software. Such an interface design easily leads to information overload and difficult operation, and it cannot guide the user efficiently through the synthesis step.
Disclosure of Invention
The invention aims to provide an interaction method, a device, and a storage medium for a 3D scanning synthesis interface, which solve the problem of how to guide a user through the synthesis step of 3D scanning.
In order to achieve the above object, the present invention provides the following technical solutions:
in a first aspect, an interaction method for a 3D scanning synthesis interface is provided, including:
acquiring a scanning data layer;
comparing the number of scan data layers with a preset number of scan data layers;
highlighting an add button or a synthesis button on a synthesis interface according to the result of the comparison;
receiving a first synthesis instruction acted on the synthesis button by a user, automatically aligning the scan data layers, and generating a first 3D model;
displaying operation buttons, an editing panel, and a first text prompt box on the synthesis interface, wherein the operation buttons comprise an add button, a synthesis button, and a manual alignment button, and the first text prompt box is used to indicate the function of an operation button or of the editing panel;
receiving a corresponding operation instruction acted on the operation button or the editing panel by a user, and processing the scan data layer corresponding to the operation instruction;
and generating a second 3D model according to the processed scanning data layer.
In a second aspect, there is provided a computer device comprising:
a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to perform the steps of:
acquiring a scanning data layer;
comparing the number of scan data layers with a preset number of scan data layers;
highlighting an add button or a synthesis button on a synthesis interface according to the result of the comparison;
receiving a first synthesis instruction acted on the synthesis button by a user, automatically aligning the scan data layers, and generating a first 3D model;
displaying operation buttons, an editing panel, and a first text prompt box on the synthesis interface, wherein the operation buttons comprise an add button, a synthesis button, and a manual alignment button, and the first text prompt box is used to indicate the function of an operation button or of the editing panel;
receiving a corresponding operation instruction acted on the operation button or the editing panel by a user, and processing the scan data layer corresponding to the operation instruction;
and generating a second 3D model according to the processed scanning data layer.
In a third aspect, a computer-readable storage medium is provided, storing a computer program which, when executed by a processor, causes the processor to perform the steps of:
acquiring a scanning data layer;
comparing the number of scan data layers with a preset number of scan data layers;
highlighting an add button or a synthesis button on a synthesis interface according to the result of the comparison;
receiving a first synthesis instruction acted on the synthesis button by a user, automatically aligning the scan data layers, and generating a first 3D model;
displaying operation buttons, an editing panel, and a first text prompt box on the synthesis interface, wherein the operation buttons comprise an add button, a synthesis button, and a manual alignment button, and the first text prompt box is used to indicate the function of an operation button or of the editing panel;
receiving a corresponding operation instruction acted on the operation button or the editing panel by a user, and processing the scan data layer corresponding to the operation instruction;
and generating a second 3D model according to the processed scanning data layer.
According to the interaction method, device, and storage medium of the 3D scanning synthesis interface, the add button or the synthesis button is first highlighted according to the comparison between the number of scan data layers and the preset number. When there are too few scan data layers, the user is guided to add layers, avoiding the situation in which a 3D model cannot be generated, or the generated model is defective, because of an insufficient number of scan layers; when there are enough scan data layers, the user is guided to synthesize the 3D model directly, without wasting time adding further layers. This makes it convenient for a user unfamiliar with the operational flow of the 3D scanning synthesis step to synthesize a high-quality 3D model.
Secondly, when the user clicks the synthesis button, the scan data are automatically aligned and a 3D model is generated automatically. If the user is not satisfied with the automatically generated model, the text prompt boxes on the synthesis interface explain the functions of the other buttons, so the user can select the add button or the manual alignment button to process the scan data layers accordingly, and a 3D model is then generated from the processed layers. By displaying the operation buttons, the editing panel, and the text prompt boxes, the user is guided to further process the scan data layers, complete the synthesis step well, and thereby synthesize the 3D model.
Drawings
In order to illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings used in their description are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and those skilled in the art can derive other drawings from them without creative effort.
Wherein:
FIG. 1 is a flow diagram of an interaction method for a 3D scan composition interface in one embodiment;
FIG. 2 is a block diagram of an interaction device for a 3D scan composition interface in one embodiment;
FIG. 3 is a diagram of the internal structure of a computer device in one embodiment;
FIG. 4 is a diagram illustrating an interaction method of a 3D scan composition interface in one embodiment;
FIG. 5 is a diagram illustrating an interaction method of a 3D scan composition interface in one embodiment;
FIG. 6 is a diagram illustrating an interaction method of a 3D scan composition interface in one embodiment;
FIG. 7 is a diagram illustrating an interaction method of a 3D scan composition interface in one embodiment;
FIG. 8 is a diagram illustrating an interaction method of a 3D scan composition interface in one embodiment;
Reference numerals: 1. editing panel; 2. object to be scanned; 3. working window; 4. first text prompt box; 5. manual alignment button; 6. add button; 7. synthesis button; 8. popup window; 9. handheld mode button; 10. turntable mode button; 11. second text prompt box; 12. third text prompt box; 13. fourth text prompt box.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It is noted that the terms "comprises," "comprising," and "having" and any variations thereof in the description and claims of this application and the drawings described above are intended to cover non-exclusive inclusions. For example, a process, method, terminal, product, or apparatus that comprises a list of steps or elements is not limited to the listed steps or elements but may alternatively include other steps or elements not listed or inherent to such process, method, product, or apparatus. In the claims, the description and the drawings of the specification of the present application, relational terms such as "first" and "second", and the like, may be used solely to distinguish one entity/action/object from another entity/action/object without necessarily requiring or implying any actual such relationship or order between such entities/actions/objects.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
A new user of 3D scanning software, being unfamiliar with it, does not know how to operate the synthesis step to synthesize a 3D model, nor how many scan data layers are needed to synthesize the model well. It is therefore necessary to guide new users through the synthesis step: help them determine whether the existing scan data layers suffice to generate a 3D model, and, depending on the result, guide them either to add scan data layers or to generate the 3D model directly.
As shown in fig. 1, an interaction method for a 3D scanning synthesis interface is provided, which specifically includes the following steps:
step 101, a terminal acquires a scanning data layer.
After the user finishes scanning, the terminal acquires a scan data layer. A scan data layer is the raw point cloud data captured by the scanner, before any mesh has been generated by computer processing. The terminal comprises, for example, a computer or a tablet computer.
Step 102, the terminal compares the number of scan data layers with the preset number of scan data layers.
The terminal stores a preset number of scan data layers, which can be set according to actual needs. For example, experience suggests that 3 or more scan data layers are generally needed to synthesize a 3D model, so the preset number may be set to 3. If the terminal has acquired fewer than 3 scan data layers, the number of layers is likely insufficient, and more layers should be added so that they suffice to generate the 3D model. If the terminal has acquired 3 or more scan data layers, the number of layers is sufficient and the model can be generated without adding any. This makes it convenient for a user unfamiliar with the operational flow of the 3D scanning synthesis step to synthesize a high-quality 3D model.
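The layer-count check above can be sketched in a few lines. This is an illustrative sketch only (the patent describes interface behavior, not code); the function name is hypothetical, and the default threshold of 3 follows the example in the text.

```python
# Decide which button the synthesis interface should highlight, based on
# how many scan data layers have been acquired versus the preset number.
def button_to_highlight(layer_count: int, preset: int = 3) -> str:
    """Return 'synthesis' when there are enough scan data layers to
    generate a 3D model, 'add' when more layers should be captured."""
    return "synthesis" if layer_count >= preset else "add"
```

With the default threshold, two layers would highlight the add button and three or more would highlight the synthesis button.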
Step 103, if the number of scan data layers is greater than or equal to the preset number, the terminal highlights the synthesis button 7 on the synthesis interface.
When the number of scan data layers is greater than or equal to the preset number, experience indicates that the layers acquired by the terminal are sufficient to generate the 3D model, and the model can be generated without adding more. The terminal then highlights the synthesis button 7 on the synthesis interface. Highlighting may be a highlight, for example rendering the synthesis button 7 in a bright color such as red or yellow, or making the synthesis button 7 blink. Its purpose is to guide the user to click the synthesis button 7 and thereby synthesize the 3D model.
Step 104, the terminal receives a first synthesis instruction applied by the user to the synthesis button 7, automatically aligns the scan data layers, and generates a first 3D model.
After the user clicks the synthesis button 7, the terminal receives the first synthesis instruction applied to it and automatically aligns the scan data layers to obtain the 3D model. For easily identifiable objects, automatic alignment alone yields a 3D model.
Step 105, the terminal displays operation buttons, the editing panel 1, and the first text prompt box 4 on the synthesis interface, wherein the operation buttons comprise the add button 6, the synthesis button 7, and the manual alignment button 5, and the first text prompt box 4 indicates the function of an operation button or of the editing panel 1.
When the user is not satisfied with the 3D model generated by automatic alignment, whether because the scan data layers are insufficient or because the scanned object is complex, the add button 6 or the manual alignment button 5 may be clicked on the synthesis interface. As shown in fig. 4, text prompt boxes are displayed near the add button 6 and the manual alignment button 5, describing their functions. For example, the prompt box for the add button 6 may read: click the add button 6 to return to the scanning state and add a scan data layer. The prompt box for the manual alignment button 5 may read: click the manual alignment button 5 to align the scan layer data manually and generate a better 3D model. The conventional flow of 3D scanning is: scan, edit, synthesize. When the user clicks the add button 6, the terminal returns to the scanning state and displays a scanning interface on which the user can add scan layer data. When the user clicks the manual alignment button 5, manual alignment begins and a 3D model is generated. By displaying the operation buttons, the editing panel 1, and the text prompt boxes, the user is guided to further process the scan data layers, complete the synthesis step well, and thereby synthesize the 3D model.
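The button-to-hint mapping above can be sketched as a small lookup. The dictionary keys and function name are hypothetical; the hint wording paraphrases the example prompts given in the description.

```python
# Hypothetical hint texts for the prompt boxes shown near the buttons.
BUTTON_HINTS = {
    "add": "Click the add button to return to the scanning state "
           "and add a scan data layer.",
    "manual_align": "Click the manual alignment button to align the scan "
                    "layer data manually and generate a better 3D model.",
}

def prompt_text(button_id: str) -> str:
    """Text for the prompt box next to the given operation button
    (empty string if the button has no hint)."""
    return BUTTON_HINTS.get(button_id, "")
```

A button with no registered hint simply displays no prompt box.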
Step 106, the terminal receives a corresponding operation instruction applied by the user to an operation button or the editing panel 1, and processes the scan data layer according to the operation instruction.
When the user clicks the add button 6, the terminal returns to the scanning state and displays a scanning interface on which the user can select the handheld mode or the turntable mode. Common 3D scanners divide into handheld scanners and turntable scanners: in handheld mode the scanned object stays fixed and the user holds the 3D scanner to scan it; in turntable mode the user places the object on a turntable and the fixed scanner scans it. The user scans the object in either mode to obtain a new scan data layer. After acquiring the new layer, the user may choose automatic or manual alignment. When the user clicks the manual alignment button 5, manual alignment begins; for example, a cup may be scanned in two different postures, one standing upright and one inverted. During manual alignment of the data layers, as shown in fig. 6, the terminal first displays the second text prompt box 11 on the editing panel 1, prompting the user to select the target scan data layer to be aligned. After the user selects the target layer, as shown in fig. 7, the terminal displays the third text prompt box 12 on the editing panel 1, prompting the user to select the scan data layer to align with the target layer. After the user selects that layer, as shown in fig. 8, the terminal displays the fourth text prompt box 13 on the editing panel 1, prompting the user to add marker point pairs. A marker point pair consists of marker points used for alignment when processing multiple scanned point clouds: one marker point is added to each of the two point clouds, and the two marker points form a pair.
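The patent does not specify how the marker point pairs are turned into an alignment. A standard choice for computing a rigid transform from paired points is the Kabsch least-squares fit, sketched below as an illustrative example (the function name is hypothetical, and this is one common algorithm, not necessarily the one the software uses).

```python
import numpy as np

def rigid_transform_from_pairs(src, dst):
    """Estimate rotation R and translation t that map src onto dst in the
    least-squares sense (Kabsch algorithm). src, dst: (N, 3) arrays of
    paired marker points, N >= 3, in the same order."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t
```

Applying the result as `src @ R.T + t` maps the marker points of one layer onto their pairs in the other, after which the full point clouds can be merged in a common frame.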
Step 107, the terminal generates a second 3D model from the processed scan data layers.
After the user adds scan data layers or manually aligns them, the terminal generates a 3D model from the processed scan data layers.
According to the interaction method of the 3D scanning synthesis interface, the add button 6 or the synthesis button 7 is first highlighted according to the comparison between the number of scan data layers and the preset number. When there are too few scan data layers, the user is guided to add layers, avoiding the situation in which a 3D model cannot be generated, or the generated model is defective, because of an insufficient number of scan layers; when there are enough scan data layers, the user is guided to synthesize the 3D model directly, without wasting time adding further layers. This makes it convenient for a user unfamiliar with the operational flow of the 3D scanning synthesis step to synthesize a high-quality 3D model.
Secondly, when the user clicks the synthesis button 7, the scan data are automatically aligned and a 3D model is generated automatically. If the user is not satisfied with the automatically generated model, the text prompt boxes on the synthesis interface explain the functions of the other buttons, so the user can select the add button 6 or the manual alignment button 5 to process the scan data layers accordingly, and a 3D model is then generated from the processed layers. By displaying the operation buttons, the editing panel 1, and the text prompt boxes, the user is guided to further process the scan data layers, complete the synthesis step well, and thereby synthesize the 3D model.
In one embodiment, the highlighting includes: highlighted or blinking.
The highlighting may be a highlight, for example rendering the synthesis button 7 in a bright color such as red or yellow, or it may be making the synthesis button 7 blink. Its purpose is to guide the user to click the synthesis button 7 and thereby synthesize the 3D model.
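The two highlight styles can be sketched as a pure state update. This is an illustrative sketch only; the dictionary fields and style names are hypothetical, not an actual UI API of the software.

```python
# Apply one of the two highlight styles described above to a button's
# display state, returning a new state rather than mutating the input.
def apply_highlight(button: dict, style: str = "color") -> dict:
    out = dict(button)
    if style == "color":
        out["color"] = "red"    # a bright color such as red or yellow
    elif style == "blink":
        out["blinking"] = True  # make the button flash instead
    else:
        raise ValueError(f"unknown highlight style: {style!r}")
    return out
```

Returning a copy keeps the un-highlighted state available, so the highlight can be removed again once the user has clicked the button.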
In an embodiment, after the determining the size relationship between the number of scanned data layers and the number of preset scanned data layers, the method further includes: and if the number of the scanning data layers is smaller than the number of the preset scanning data layers, the terminal highlights an adding button 6 on a synthesis interface.
The terminal stores a preset number of scan data layers, which can be set according to actual needs. For example, experience suggests that 3 or more scan data layers are generally needed to synthesize a 3D model, so the preset number may be set to 3. If the terminal has acquired fewer than 3 scan data layers, the number of layers is likely insufficient, and more layers should be added so that they suffice to generate the 3D model. This makes it convenient for a user unfamiliar with the operational flow of the 3D scanning synthesis step to synthesize a high-quality 3D model.
In an embodiment, after highlighting the add button 6 on the synthesis interface when the number of scan data layers is smaller than the preset number, the method further includes: the terminal receives a first add instruction applied by the user to the add button 6 and displays a popup window 8 on the synthesis interface, the popup window 8 comprising a handheld mode button 9 and a turntable mode button 10; receiving a first switching instruction applied by the user to the handheld mode or the turntable mode; and displaying a handheld scanning interface or a turntable scanning interface according to the first switching instruction.
The conventional flow of 3D scanning is: scan, edit, synthesize. When the terminal highlights the add button 6 and the user clicks it, as shown in fig. 5, the terminal returns to the scanning state and displays a scanning interface on which the user can select the handheld mode or the turntable mode. Common 3D scanners divide into handheld scanners and turntable scanners: in handheld mode the scanned object stays fixed and the user holds the 3D scanner to scan it; in turntable mode the user places the object on a turntable and the fixed scanner scans it. The user scans the object in either mode to obtain a new scan data layer.
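The popup's mode choice amounts to a simple dispatch from the clicked button to the interface shown. The interface strings and function name below are illustrative placeholders, not identifiers from the software.

```python
# Hypothetical dispatch for the scan-mode popup window 8: the handheld
# mode button and turntable mode button each select a scanning interface.
SCAN_INTERFACES = {
    "handheld": "handheld scanning interface",    # object fixed, scanner moves
    "turntable": "turntable scanning interface",  # scanner fixed, object rotates
}

def interface_for_mode(mode: str) -> str:
    if mode not in SCAN_INTERFACES:
        raise ValueError(f"unknown scan mode: {mode!r}")
    return SCAN_INTERFACES[mode]
```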
In one embodiment, receiving a corresponding operation instruction applied by the user to an operation button or the editing panel 1, and processing the scan data layer according to the operation instruction, includes: the terminal receives a second add instruction applied by the user to the add button 6 and displays a popup window 8 on the synthesis interface, the popup window 8 comprising a handheld mode button 9 and a turntable mode button 10;
receiving a second switching instruction applied by the user to the handheld mode or the turntable mode; and displaying a handheld scanning interface or a turntable scanning interface according to the second switching instruction.
If the user is unsatisfied with the model generated by automatic alignment and clicks the add button 6, the terminal returns to the scanning state and displays a scanning interface on which the user can select the handheld mode or the turntable mode. Common 3D scanners divide into handheld scanners and turntable scanners: in handheld mode the scanned object stays fixed and the user holds the 3D scanner to scan it; in turntable mode the user places the object on a turntable and the fixed scanner scans it. The user scans the object in either mode to obtain a new scan data layer.
In an embodiment, receiving a corresponding operation instruction applied by the user to an operation button or the editing panel 1, and processing the scan data layer according to the operation instruction, further includes: the terminal receives a manual alignment instruction applied by the user to the manual alignment button 5; displays the second text prompt box 11 on the editing panel 1 according to that instruction, the second text prompt box 11 prompting the user to select the target scan data layer to be aligned; receives a first selection instruction applied by the user to the editing panel 1, selects the target scan data layer, and displays the third text prompt box 12 on the editing panel 1, the third text prompt box 12 prompting the user to select the scan data layer to align with the target layer; receives a second selection instruction applied by the user to the editing panel 1, selects that layer, and displays the fourth text prompt box 13 on the editing panel 1, the fourth text prompt box 13 prompting the user to add marker point pairs; and receives a third add instruction applied by the user to the editing panel 1 and adds the marker point pairs.
When the user clicks the manual alignment button 5, manual alignment begins; for example, a cup may be scanned in two different postures, one standing upright and one inverted. During manual alignment, the terminal first displays the second text prompt box 11 on the editing panel 1, prompting the user to select the target scan data layer to be aligned. After the user selects the target layer, the terminal displays the third text prompt box 12 on the editing panel 1, prompting the user to select the scan data layer to align with the target layer. After the user selects that layer, the terminal displays the fourth text prompt box 13 on the editing panel 1, prompting the user to add marker point pairs. A marker point pair consists of marker points used for alignment when processing multiple scanned point clouds: one marker point is added to each of the two point clouds, and the two marker points form a pair.
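The three prompt boxes form a fixed sequence, which can be sketched as a small state machine. The step names and class are illustrative placeholders for the second, third, and fourth text prompt boxes; they are not identifiers from the software.

```python
# Hypothetical state machine for the guided manual-alignment prompts.
MANUAL_ALIGN_STEPS = (
    "select_target_layer",     # second prompt box: pick the target layer
    "select_layer_to_align",   # third prompt box: pick the layer to align
    "add_marker_point_pairs",  # fourth prompt box: add marker point pairs
)

class ManualAlignFlow:
    def __init__(self):
        self._i = 0

    @property
    def prompt(self) -> str:
        """The prompt the editing panel should currently display."""
        return MANUAL_ALIGN_STEPS[self._i]

    def advance(self) -> str:
        """Move to the next prompt after the user completes the current step;
        stays on the final prompt once it is reached."""
        if self._i < len(MANUAL_ALIGN_STEPS) - 1:
            self._i += 1
        return self.prompt
```

Driving the prompts from one ordered sequence guarantees the user always sees exactly one instruction at a time, in the order the alignment requires.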
In one embodiment, after the synthesis button 7 is highlighted on the synthesis interface, the method further includes: the terminal receives a second synthesis instruction applied by the user to the synthesis button 7; and generates the second 3D model according to the second synthesis instruction and the mark point pairs.
After the user completes manual alignment and clicks the synthesis button 7, the terminal generates the 3D model according to the synthesis instruction and the mark point pairs.
As shown in fig. 2, an interaction device for a 3D scanning synthesis interface is provided, the device comprising:
the acquisition module is used for acquiring a scanning data layer;
the judging module is used for comparing the number of the scanning data layers with a preset number of scanning data layers;
the first display module is used for highlighting a synthesis button 7 on a synthesis interface if the number of the scanning data layers is greater than or equal to the preset number of the scanning data layers;
a first generation module, configured to receive a first synthesis instruction applied by the user to the synthesis button 7, automatically align the scanning data layers, and generate a first 3D model;
the second display module is used for displaying an operation button, the editing panel 1 and a first text prompt box 4 on the synthesis interface, wherein the operation buttons comprise an adding button 6, a synthesis button 7 and a manual alignment button 5, and the first text prompt box 4 is used for prompting the functions of the operation buttons or the editing panel 1;
a receiving module, configured to receive a corresponding operation instruction applied by the user to the operation button or the editing panel 1, and perform processing corresponding to the operation instruction on the scanning data layer;
and the second generation module is used for generating a second 3D model according to the processed scanning data layer.
With the above interaction device for the 3D scanning synthesis interface, the adding button 6 or the synthesis button 7 is first highlighted according to the relationship between the number of scanning data layers and the preset number of scanning data layers. When the number of scanning data layers is insufficient, the user is guided to add scanning data layers, avoiding the situation where a 3D model cannot be generated, or the generated 3D model is defective, because too few scanning layers are available; when the number of scanning data layers is sufficient, the user is guided to synthesize the 3D model without wasting time adding unnecessary scanning layers. This makes it convenient for a user unfamiliar with the operational flow of the 3D scanning synthesis step to synthesize a high-quality 3D model.
Secondly, when the user clicks the synthesis button 7, the scanned data are automatically aligned to generate a 3D model. If the user is not satisfied with the automatically generated 3D model, the text prompt boxes on the synthesis interface explain the functions of the other buttons, so that the user can select the adding button 6 or the manual alignment button 5 to further process the scanning data layers, and a 3D model is then generated from the processed layers. By displaying the operation buttons, the editing panel 1 and the text prompt boxes, the user is guided through the further processing of the scanning data layers, so that the synthesis step can be completed with high quality and a 3D model can be synthesized.
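The guidance rule described in the two paragraphs above reduces to a single comparison: too few layers highlights the adding button, enough layers highlights the synthesis button. A minimal sketch, with button identifiers invented for the example:

```python
def button_to_highlight(num_layers, preset_num):
    """Decide which button the synthesis interface highlights, per the
    guidance rule in the embodiments: guide the user to add layers when
    there are too few, and to synthesize when there are enough.
    Button names here are hypothetical."""
    if num_layers < preset_num:
        return "adding_button"      # guide the user to add scan data layers
    return "synthesis_button"       # guide the user to synthesize the model

# With a preset of 2 layers, a single scanned layer guides the user to add more.
choice = button_to_highlight(1, 2)
```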
In one embodiment, the highlighting includes: highlighting or blinking.
In one embodiment, the apparatus further comprises: and the third display module is used for highlighting an adding button 6 on a synthesis interface if the number of the scanning data layers is less than the number of the preset scanning data layers.
In one embodiment, the third display module is further configured to receive a first adding instruction applied by the user to the adding button 6, and display a pop-up window 8 on the synthesis interface, where the pop-up window 8 includes a hand-held mode button 9 and a turntable mode button 10; receive a first switching instruction applied by the user to the hand-held mode button 9 or the turntable mode button 10; and display a hand-held scanning interface or a turntable scanning interface according to the first switching instruction.
In one embodiment, the receiving module is further configured to receive a second adding instruction applied by the user to the adding button 6, and display the pop-up window 8 on the synthesis interface, where the pop-up window 8 includes the hand-held mode button 9 and the turntable mode button 10; receive a second switching instruction applied by the user to the hand-held mode button 9 or the turntable mode button 10; and display the hand-held scanning interface or the turntable scanning interface according to the second switching instruction.
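In both embodiments above, the pop-up window 8 routes a switching instruction to one of two scanning interfaces. A hypothetical dispatcher for that routing (interface and button names are invented for illustration):

```python
def dispatch_switch_instruction(mode_button):
    """Map the mode button pressed in pop-up window 8 to the scanning
    interface the terminal should display. Names are hypothetical."""
    routes = {
        "hand_held_mode_button": "hand_held_scanning_interface",
        "turntable_mode_button": "turntable_scanning_interface",
    }
    if mode_button not in routes:
        # Only the two mode buttons described in the embodiments exist.
        raise ValueError(f"unknown mode button: {mode_button!r}")
    return routes[mode_button]
```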
In one embodiment, the receiving module is further configured to receive a manual alignment instruction applied by the user to the manual alignment button 5; display a second text prompt box 11 on the editing panel 1 according to the manual alignment instruction, the second text prompt box 11 being used for prompting the user to select a target scan data layer to be aligned; receive a first selection instruction applied by the user to the editing panel 1, select the target scan data layer, and display a third text prompt box 12 on the editing panel 1, the third text prompt box 12 being used for prompting the user to select a scan data layer to be aligned with the target scan data layer; receive a second selection instruction applied by the user to the editing panel 1, select the scan data layer to be aligned with the target scan data layer, and display a fourth text prompt box 13 on the editing panel 1, the fourth text prompt box 13 being used for prompting the user to add mark point pairs; and receive a third adding instruction applied by the user to the editing panel 1, and add the mark point pairs.
In one embodiment, the receiving module is further configured to highlight the synthesis button 7 on the synthesis interface.
In one embodiment, the receiving module is further configured to receive a second synthesis instruction applied by the user to the synthesis button 7, and generate the second 3D model according to the second synthesis instruction and the mark point pairs. As shown in fig. 3, the computer device includes a processor, a memory and a network interface connected by a system bus. The memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium of the computer device stores an operating system and may also store a computer program which, when executed by the processor, causes the processor to implement the above-mentioned interaction method for the 3D scanning synthesis interface. The internal memory may also store a computer program which, when executed by the processor, causes the processor to execute the above-mentioned interaction method for the 3D scanning synthesis interface. Those skilled in the art will appreciate that the configuration shown in fig. 3 is a block diagram of only the portion of the configuration relevant to the present application and does not limit the devices to which the present application may be applied; a particular device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer-readable storage medium is provided, storing a computer program which, when executed by a processor, causes the processor to perform the steps of the above-described interaction method for a 3D scanning synthesis interface.
It is to be understood that the above-described interaction method, device and storage medium for a 3D scanning synthesis interface belong to a single general inventive concept, and the embodiments are mutually applicable.
It will be understood by those skilled in the art that all or part of the processes of the methods in the embodiments described above can be implemented by a computer program, which can be stored in a non-volatile computer-readable storage medium; when the program is executed, it can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM) or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM) and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments can be combined arbitrarily. For the sake of brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, it should be considered within the scope of the present specification.
The above-mentioned embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the present application. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, and these fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. An interaction method for a 3D scanning synthesis interface, the method comprising:
acquiring scanning data layers;
comparing the number of the scanning data layers with a preset number of scanning data layers;
if the number of the scanning data layers is greater than or equal to the preset number of scanning data layers, highlighting a synthesis button on a synthesis interface;
receiving a first synthesis instruction applied by a user to the synthesis button, automatically aligning the scanning data layers, and generating a first 3D model;
displaying an operation button, an editing panel and a first text prompt box on the synthesis interface, wherein the operation buttons comprise an adding button, a synthesis button and a manual alignment button, and the first text prompt box is used for prompting the functions of the operation buttons or the editing panel;
receiving a corresponding operation instruction applied by the user to the operation button or the editing panel, and performing processing corresponding to the operation instruction on the scanning data layers; and
generating a second 3D model according to the processed scanning data layers.
2. The interaction method for the 3D scanning synthesis interface according to claim 1, wherein the highlighting comprises: highlighting or blinking.
3. The interaction method for the 3D scanning synthesis interface according to claim 1, further comprising, after the comparing of the number of the scanning data layers with the preset number of scanning data layers:
if the number of the scanning data layers is smaller than the preset number of scanning data layers, highlighting an adding button on the synthesis interface.
4. The interaction method for the 3D scanning synthesis interface according to claim 3, further comprising, after the highlighting of the adding button on the synthesis interface if the number of the scanning data layers is smaller than the preset number of scanning data layers:
receiving a first adding instruction applied by the user to the adding button, and displaying a pop-up window on the synthesis interface, wherein the pop-up window comprises a hand-held mode button and a turntable mode button;
receiving a first switching instruction applied by the user to the hand-held mode button or the turntable mode button; and
displaying a hand-held scanning interface or a turntable scanning interface according to the first switching instruction.
5. The interaction method for the 3D scanning synthesis interface according to claim 1, wherein the receiving of the corresponding operation instruction applied by the user to the operation button or the editing panel, and the performing of processing corresponding to the operation instruction on the scanning data layers, comprise:
receiving a second adding instruction applied by the user to the adding button, and displaying a pop-up window on the synthesis interface, wherein the pop-up window comprises a hand-held mode button and a turntable mode button;
receiving a second switching instruction applied by the user to the hand-held mode button or the turntable mode button; and
displaying a hand-held scanning interface or a turntable scanning interface according to the second switching instruction.
6. The interaction method for the 3D scanning synthesis interface according to claim 1, wherein the receiving of the corresponding operation instruction applied by the user to the operation button or the editing panel, and the performing of processing corresponding to the operation instruction on the scanning data layers, further comprise:
receiving a manual alignment instruction applied by the user to the manual alignment button;
displaying a second text prompt box on the editing panel according to the manual alignment instruction, wherein the second text prompt box is used for prompting the user to select a target scanning data layer to be aligned;
receiving a first selection instruction applied by the user to the editing panel, selecting the target scanning data layer, and displaying a third text prompt box on the editing panel, wherein the third text prompt box is used for prompting the user to select a scanning data layer to be aligned with the target scanning data layer;
receiving a second selection instruction applied by the user to the editing panel, selecting the scanning data layer to be aligned with the target scanning data layer, and displaying a fourth text prompt box on the editing panel, wherein the fourth text prompt box is used for prompting the user to add mark point pairs; and
receiving a third adding instruction applied by the user to the editing panel, and adding the mark point pairs.
7. The interaction method for the 3D scanning synthesis interface according to claim 6, further comprising, after the receiving of the third adding instruction applied by the user to the editing panel and the adding of the mark point pairs:
highlighting the synthesis button on the synthesis interface.
8. The interaction method for the 3D scanning synthesis interface according to claim 7, further comprising, after the highlighting of the synthesis button on the synthesis interface:
receiving a second synthesis instruction applied by the user to the synthesis button; and
generating the second 3D model according to the second synthesis instruction and the mark point pairs.
9. A computer device comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of the interaction method for a 3D scanning synthesis interface according to any one of claims 1 to 8.
10. A computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the steps of the interaction method for a 3D scanning synthesis interface according to any one of claims 1 to 8.
CN202111413577.1A 2021-11-25 2021-11-25 Interaction method, device and storage medium of 3D scanning synthesis interface Active CN114201082B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111413577.1A CN114201082B (en) 2021-11-25 2021-11-25 Interaction method, device and storage medium of 3D scanning synthesis interface

Publications (2)

Publication Number Publication Date
CN114201082A true CN114201082A (en) 2022-03-18
CN114201082B CN114201082B (en) 2024-07-26

Family

ID=80648994

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111413577.1A Active CN114201082B (en) 2021-11-25 2021-11-25 Interaction method, device and storage medium of 3D scanning synthesis interface

Country Status (1)

Country Link
CN (1) CN114201082B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105988746A (en) * 2015-01-30 2016-10-05 深圳市亿思达科技集团有限公司 3D (three-dimensional) printing method and electronic equipment
CN109792488A (en) * 2016-10-10 2019-05-21 高通股份有限公司 User interface to assist three-dimensional sweep object
CN111641856A (en) * 2020-05-22 2020-09-08 海信视像科技股份有限公司 Prompt message display method for guiding user operation in display equipment and display equipment
USRE48221E1 (en) * 2010-12-06 2020-09-22 3Shape A/S System with 3D user interface integration
CN112598785A (en) * 2020-12-25 2021-04-02 游艺星际(北京)科技有限公司 Method, device and equipment for generating three-dimensional model of virtual image and storage medium
CN112739974A (en) * 2018-09-19 2021-04-30 阿泰克欧洲公司 Three-dimensional scanner with data collection feedback


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Zhang Jiawei et al., "Cloud-Computing-Based Medical Image Processing and 3D Printing Platform", vol. 40, no. 05, 15 May 2019 (2019-05-15), pages 115-127 *
Yang Qingyun, "Research on the Application of 3D Dynamic Visualization Technology in Water Conservancy Engineering", Engineering and Technological Research, no. 08, pages 134-135 *
Fan Jiawei, "Design and Implementation of Kinect-Based Human-Body 3D Scanning and Modeling Software", China Master's Theses Full-text Database (Engineering Science and Technology I), no. 02, pages 024-869 *

Also Published As

Publication number Publication date
CN114201082B (en) 2024-07-26


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant