CN109407942B - Model processing method and device, control client and storage medium - Google Patents


Info

Publication number
CN109407942B
CN109407942B (application number CN201811300475.7A)
Authority
CN
China
Prior art keywords
model
display
navigation
client
display client
Prior art date
Legal status
Active
Application number
CN201811300475.7A
Other languages
Chinese (zh)
Other versions
CN109407942A (en)
Inventor
白桦 (Bai Hua)
王欢 (Wang Huan)
任志宏 (Ren Zhihong)
Current Assignee
Shenzhen Tiannuo Taian Health Information Technology Co ltd
Original Assignee
Shenzhen Tiannuo Taian Health Information Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Tiannuo Taian Health Information Technology Co., Ltd.
Priority claimed from application CN201811300475.7A
Publication of CN109407942A
Application granted
Publication of CN109407942B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/04845 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G06F3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/003 Navigation within 3D models or images


Abstract

The disclosure relates to a model processing method, a model processing device, a control client and a storage medium. The method comprises the following steps: receiving a target instruction input by a user; and when the target instruction is a manual rotation instruction carrying a rotation direction and a rotation angle, controlling the displayed structural model of the first object and the displayed navigation model of the second object to rotate according to the rotation direction and the rotation angle. In this embodiment, the displayed structural model and navigation model can be rotated, enabling human-computer interaction with the models.

Description

Model processing method and device, control client and storage medium
Technical Field
The present disclosure relates to the field of teaching, and in particular, to a model processing method and apparatus, a control client, and a storage medium.
Background
With the popularization and development of computer technology, more and more teachers adopt multimedia teaching. Compared with blackboard writing, multimedia teaching is more vivid and flexible; it requires no repeated writing and thus reduces the workload of teachers. At present, however, multimedia teaching presents videos, texts or two-dimensional images through documents, and cannot present three-dimensional images with which users can interact.
Disclosure of Invention
The embodiment of the disclosure provides a model processing method, a model processing device, a control client and a storage medium. The technical scheme is as follows:
according to a first aspect of the embodiments of the present disclosure, there is provided a model processing method, including:
receiving a target instruction input by a user;
when the target instruction is a manual rotation instruction carrying a rotation direction and a rotation angle, controlling the displayed structure model of the first object and the displayed navigation model of the second object to rotate according to the rotation direction and the rotation angle;
wherein the navigation model of the second object comprises a contour model of the second object and a contour model of the first object, the first object being a component of the second object; the contour model of the first object is arranged in the contour model of the second object according to the actual structures of the first object and the second object.
Optionally, the method further includes:
sending the first rotation result to a display client; so that the display client displays the rotated structure model and the rotated navigation model according to the first rotation result.
Optionally, before receiving the target instruction input by the user, the method further includes:
receiving a display instruction sent by the display client; the display instruction carries an identifier of a first object;
querying the navigation model and the structural model corresponding to the identifier from a model library;
and sending the navigation model and the structure model to the display client so that the display client can display the structure model in a first window and display the navigation model in a second window.
Optionally, after sending the navigation model and the structure model to the display client, the method further includes:
receiving a target instruction sent by the display client;
when the target instruction is other control instructions for changing the structure model, executing corresponding control operation on the structure model;
sending a control result to a display client; so that the display client displays the controlled structure model according to the control result.
Optionally, after receiving the target instruction sent by the display client, the method further includes:
when the target instruction is an automatic playing instruction, controlling the structural model and the navigation model to rotate along a preset direction at a preset angular speed; or obtaining an animation video, wherein the animation video comprises multiple frames of images, each frame displaying the structural model and the navigation model at the same angle, and the angle differs from frame to frame;
and sending the second rotation result or the animation video to a display client.
According to a second aspect of the embodiments of the present disclosure, there is provided a model processing apparatus including:
the receiving module is used for receiving a target instruction input by a user;
the rotation module is used for controlling the displayed structure model of the first object and the displayed navigation model of the second object to rotate according to the rotation direction and the rotation angle when the target instruction is a manual rotation instruction carrying the rotation direction and the rotation angle;
wherein the navigation model of the second object comprises a contour model of the second object and a contour model of the first object, the first object being a component of the second object; the contour model of the first object is arranged in the contour model of the second object according to the actual structure of the first object and the second object.
Optionally, the apparatus further comprises:
the sending module is used for sending the first rotation result to the display client; so that the display client displays the rotated structure model and the rotated navigation model according to the first rotation result.
Optionally, the apparatus further comprises:
the receiving module is further configured to: receiving a display instruction sent by the display client; the display instruction carries an identifier of a first object;
the query module is used for querying the navigation model and the structural model corresponding to the identifier from a model library;
the sending module is further configured to: and sending the navigation model and the structure model to the display client, so that the display client can display the structure model in a first window and display the navigation model in a second window.
According to a third aspect of the embodiments of the present disclosure, there is provided a control client, including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
receiving a target instruction input by a user;
when the target instruction is a manual rotation instruction carrying a rotation direction and a rotation angle, controlling the displayed structure model of the first object and the displayed navigation model of the second object to rotate according to the rotation direction and the rotation angle;
wherein the navigation model of the second object comprises a contour model of the second object and a contour model of the first object, the first object being a component of the second object; the contour model of the first object is arranged in the contour model of the second object according to the actual structures of the first object and the second object.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium, on which a computer program is stored, which program, when executed by a processor, performs the steps of the method of any one of the first aspects.
The technical scheme provided by the embodiments of the present disclosure can have the following beneficial effect: the displayed structural model and navigation model can be rotated, enabling human-computer interaction with the models.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a flow diagram illustrating a navigation model processing method according to an exemplary embodiment.
FIG. 2 is a flow diagram illustrating a navigation model processing method according to an exemplary embodiment.
FIG. 3 is a flow diagram illustrating a navigation model processing method according to an exemplary embodiment.
FIG. 4 is a flow diagram illustrating a navigation model processing method according to an exemplary embodiment.
FIG. 5 is a block diagram illustrating a navigation model processing apparatus according to an exemplary embodiment.
FIG. 6 is a block diagram illustrating a navigation model processing apparatus according to an exemplary embodiment.
FIG. 7 is a block diagram illustrating a navigation model processing apparatus according to an exemplary embodiment.
FIG. 8 is a block diagram illustrating a navigation model processing apparatus according to an exemplary embodiment.
FIG. 9 is a block diagram illustrating a navigation model processing apparatus according to an exemplary embodiment.
FIG. 10 is a block diagram illustrating a control client according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
Fig. 1 is a flowchart illustrating a model processing method according to an exemplary embodiment, where, as shown in fig. 1, the model processing method is used in a model processing apparatus applied to a control client, the method includes the following steps 101-102:
step 101, receiving a target instruction input by a user.
Here, the target instruction is an instruction input by the user and received by the display client. The target instruction may be entered via keyboard, mouse click, or screen touch.
And 102, when the target instruction is a manual rotation instruction carrying a rotation direction and a rotation angle, controlling the displayed structural model of the first object and the displayed navigation model of the second object to rotate according to the rotation direction and the rotation angle.
Wherein the navigation model of the second object comprises a contour model of the second object and a contour model of a first object, the first object being a component of the second object; the contour model of the first object is arranged in the contour model of the second object according to the actual structure of the first object and the second object.
Here, rotation tilts or offsets the structural model and the navigation model by a given angle within their respective coordinate systems. In the initial state, the first object shown by the structural model and the second object shown by the navigation model are displayed according to a preset standard orientation of the first object.
By way of example, assume the first object is a heart and the second object is a human body. The navigation model then comprises a contour model of the human body and a contour model of the heart; the contour model of the heart is arranged inside the contour model of the human body, and their relative position matches the actual relative position of the heart within the human body. A contour model is a model that shows only the contour; the structural model is a model that shows the internal structure.
In this embodiment, the displayed structural model and navigation model can be rotated, enabling human-computer interaction with the models.
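As an illustrative sketch only (not taken from the patent; all class and method names, and the direction tokens, are hypothetical), the rotation of step 102 can be pictured as applying the same direction and angle to both displayed models and collecting the combined orientation as the first rotation result:

```python
class Model:
    """Minimal stand-in for a displayed 3D model, tracking yaw/pitch in degrees."""
    def __init__(self, name):
        self.name = name
        self.yaw = 0.0    # rotation about the vertical axis
        self.pitch = 0.0  # rotation about the horizontal axis

    def rotate(self, direction, angle):
        # Hypothetical direction tokens; a real client might carry a 3D axis.
        if direction == "left":
            self.yaw = (self.yaw - angle) % 360
        elif direction == "right":
            self.yaw = (self.yaw + angle) % 360
        elif direction == "up":
            self.pitch = (self.pitch + angle) % 360
        elif direction == "down":
            self.pitch = (self.pitch - angle) % 360
        else:
            raise ValueError(f"unknown rotation direction: {direction!r}")

def handle_manual_rotation(structure_model, navigation_model, direction, angle):
    """Step 102: rotate both displayed models by the same direction and angle."""
    structure_model.rotate(direction, angle)
    navigation_model.rotate(direction, angle)
    # This dictionary plays the role of the "first rotation result" that is
    # later sent to the display client.
    return {"structure": (structure_model.yaw, structure_model.pitch),
            "navigation": (navigation_model.yaw, navigation_model.pitch)}
```

Keeping the two models in lockstep is the point of step 102: the navigation model always shows the same orientation as the structural model.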
Optionally, as shown in fig. 2, the method further includes:
and 103, sending the first rotation result to the display client.
The first rotation result is a result after the structural model and the navigation model are both rotated in the rotation direction and the rotation angle.
Optionally, as shown in fig. 3, before step 101, the method further includes:
104, receiving a display instruction sent by a display client; the display instruction carries an identification of the first object.
And 105, inquiring the navigation model and the structural model corresponding to the identification from the model library.
The model library stores models of various objects, each object has an identifier, and the identifier and the model have a corresponding relation so as to query the model.
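A minimal sketch of the model-library lookup in steps 104-105, assuming an in-memory mapping from identifier to model resources (the identifiers and file paths below are invented for illustration, not from the patent):

```python
# Hypothetical in-memory model library: identifier -> model resources.
MODEL_LIBRARY = {
    "heart": {"structure": "models/heart_structure.obj",
              "navigation": "models/body_with_heart_outline.obj"},
    "lung":  {"structure": "models/lung_structure.obj",
              "navigation": "models/body_with_lung_outline.obj"},
}

def query_models(identifier):
    """Steps 104-105: look up the navigation and structural models by the
    identifier carried in the display instruction."""
    entry = MODEL_LIBRARY.get(identifier)
    if entry is None:
        raise KeyError(f"no model registered for identifier {identifier!r}")
    return entry["navigation"], entry["structure"]
```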
And step 106, sending the navigation model and the structural model to the display client.
In addition to the above method of finding a model, the method may further include:
the device receives a model selection instruction sent by a display client; acquiring and sending a tree display graph in a model library to a display client, wherein the tree display graph displays the type of a model; the display client displays the tree display picture and receives a selection instruction acted on a target category by a user; the display client sends the selection instruction to the device, and the device acquires and sends the model reference image belonging to the target category to the display client according to the selection instruction; the model reference image is a planar image of the object; displaying the model reference image belonging to the target category on the client, and receiving a re-selection instruction input by a user and acting on the target model reference image; the display client sends the re-selection instruction to the device, and the device acquires a target model corresponding to the target model reference graph as the structural model according to the re-selection instruction; querying the navigation model according to the structural model; the device sends the navigation model and the structural model to the display client.
Optionally, after step 106, the method further includes:
receiving a target instruction sent by the display client; when the target instruction is another control instruction for changing the structural model, performing the corresponding control operation on the structural model; and sending the control result to the display client, so that the display client displays the controlled structural model according to the control result.
The other control instructions here include a zoom instruction, a detail display instruction for displaying a specific structure of the first object, and the like. A detail display instruction is generated when the user acts on a detail of the structural model. For example, when the first object is a heart and the user acts on a blood vessel of the heart shown in the structural model, the detail display instruction hides the structural model of the heart and shows the structural model of the blood vessel.
In this embodiment, the model is controlled through control parameters, which include the position, angle, and focal length of the camera and of the three-dimensional model; a stereoscopic rendering engine renders the picture of the model or the animation video in real time according to these control parameters.
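A sketch of how such control parameters might be bundled for a rendering engine; the parameter names and the zoom rule (scaling the focal length) are assumptions for illustration, not taken from the patent:

```python
def make_control_parameters(cam_pos=(0.0, 0.0, 5.0), cam_angle=(0.0, 0.0),
                            focal_length=50.0, model_angle=(0.0, 0.0, 0.0)):
    """Bundle the control parameters the description mentions (camera and
    model position/angle, focal length) for real-time rendering."""
    return {"camera": {"position": cam_pos, "angle": cam_angle,
                       "focal_length": focal_length},
            "model": {"angle": model_angle}}

def apply_zoom(params, factor):
    """Hypothetical zoom instruction handling: scale the camera focal length,
    leaving every other control parameter unchanged."""
    params["camera"]["focal_length"] *= factor
    return params
```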
Optionally, after receiving the target instruction sent by the display client, the method further includes:
when the target instruction is an automatic playing instruction, controlling the structural model and the navigation model to rotate along a preset direction at a preset angular speed, or obtaining an animation video comprising multiple frames of images, each frame displaying the structural model and the navigation model at the same angle, with the angle differing from frame to frame; and sending the second rotation result or the animation video to the display client.
In this embodiment, when the animation video is paused, the structural model and the navigation model at the angle shown in the paused frame can also serve as the structural model and the navigation model in step 102.
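The automatic-play rotation at a preset angular speed can be sketched as generating one angle per frame; the frame rate, speed, and duration below are arbitrary example values, not from the patent:

```python
def autoplay_frames(fps=30, angular_speed=45.0, duration=2.0, start_angle=0.0):
    """Automatic play: compute the shared rotation angle (degrees) of the
    structural and navigation models for each frame, spinning along a preset
    direction at `angular_speed` degrees per second. Pausing on a frame
    yields the angle from which manual rotation (step 102) can continue."""
    n_frames = int(fps * duration)
    return [(start_angle + angular_speed * i / fps) % 360
            for i in range(n_frames)]
```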
The display client may display the model within a PPT presentation; in that case, the PPT needs to be edited, that is, the model needs to be imported, before the model can be displayed.
FIG. 4 is a flowchart illustrating a model processing method according to an exemplary embodiment, as shown in FIG. 4, for use in a model processing system including a control client and a display client, the method including the following:
step 201, the display client receives a display instruction input by a user.
Here, the display instruction carries an identification of the first object.
Step 202, the display client sends a display instruction to the control client.
Step 203, the control client queries the navigation model and the structural model corresponding to the identifier from the model library.
And step 204, controlling the client to send the navigation model and the structural model to the display client.
The contour model of the first object in the navigation model differs according to the first object shown by the structural model.
And step 205, displaying the structural model on the first window and displaying the navigation model on the second window by the display client.
The navigation model of the second object comprises a contour model of the second object and a contour model of a first object, and the first object is a component of the second object; the contour model of the first object is arranged in the contour model of the second object according to the actual structure of the first object and the second object.
Step 206, the display client receives the target instruction input by the user.
And step 207, the display client sends a target instruction to the control client.
In step 208, the control client determines which instruction the target instruction is.
And step 209, when the target instruction is a manual rotation instruction carrying a rotation direction and a rotation angle, controlling the client to control the displayed structure model of the first object and the displayed navigation model of the second object to rotate according to the rotation direction and the rotation angle.
Step 210, the control client sends the first rotation result to the display client.
And step 211, the display client displays the rotated structure model and the rotated navigation model according to the first rotation result.
In this embodiment, the displayed structural model and navigation model can be displayed and rotated, enabling human-computer interaction with the models.
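The message flow of steps 201-211 can be sketched with two toy client classes; every class, method, and message field name below is hypothetical, chosen only to mirror the step sequence:

```python
class ControlClient:
    """Minimal stand-in for the control client of Fig. 4."""
    def __init__(self, library):
        self.library = library  # identifier -> (navigation, structure)

    def handle(self, message):
        if message["type"] == "display":         # steps 203-204: query + send
            return self.library[message["id"]]
        if message["type"] == "rotate":          # steps 208-210: rotate + send
            return {"rotated_by": (message["direction"], message["angle"])}
        raise ValueError("unknown instruction")

class DisplayClient:
    """Shows the structural model in window 1 and the navigation model in 2."""
    def __init__(self, control):
        self.control = control
        self.windows = {}

    def request_display(self, identifier):       # steps 201-202 and 205
        nav, struct = self.control.handle({"type": "display", "id": identifier})
        self.windows[1], self.windows[2] = struct, nav

    def request_rotation(self, direction, angle):  # steps 206-207 and 211
        return self.control.handle({"type": "rotate",
                                    "direction": direction, "angle": angle})
```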
The following are embodiments of the disclosed apparatus that may be used to perform embodiments of the disclosed methods.
Fig. 5 is a block diagram illustrating a model processing device that may be implemented as part or all of a control client in software, hardware, or a combination of both, according to an example embodiment. As shown in fig. 5, the model processing apparatus includes:
a receiving module 301, configured to receive a target instruction input by a user;
a rotation module 302, configured to, when the target instruction is a manual rotation instruction carrying a rotation direction and a rotation angle, control the displayed structure model of the first object and the displayed navigation model of the second object to rotate according to the rotation direction and the rotation angle;
wherein the navigation model of the second object comprises a contour model of the second object and a contour model of the first object, the first object being a component of the second object; the contour model of the first object is arranged in the contour model of the second object according to the actual structure of the first object and the second object.
In this embodiment, the displayed structural model and navigation model can be rotated, enabling human-computer interaction with the models.
In one embodiment, as shown in fig. 6, the apparatus further comprises:
a sending module 303, configured to send the first rotation result to the display client; so that the display client displays the rotated structure model and the rotated navigation model according to the first rotation result.
In one embodiment, as shown in fig. 7, the apparatus further comprises:
the receiving module 301 is further configured to: receiving a display instruction sent by the display client; the display instruction carries an identifier of a first object;
a query module 304, configured to query the navigation model and the structure model corresponding to the identifier from a model library;
the sending module 303 is configured to send the navigation model and the structure model to the display client, so that the display client displays the structure model in a first window and displays the navigation model in a second window.
In one embodiment, as shown in fig. 8, the apparatus further comprises:
the receiving module 301 is further configured to: receiving a target instruction sent by the display client;
a control module 305, configured to perform a corresponding control operation on the structural model when the target instruction is another control instruction for changing the structural model;
the sending module 303 is further configured to: sending a control result to a display client; so that the display client displays the controlled structure model according to the control result.
In one embodiment, as shown in fig. 9, the apparatus further comprises:
the control module is further configured to: when the target instruction is an automatic playing instruction, controlling the structure model and the navigation model to rotate along a preset direction according to a preset angular speed;
an obtaining module 306, configured to obtain an animation video when the target instruction is an automatic playing instruction, wherein the animation video comprises multiple frames of images, each frame displaying the structural model and the navigation model at the same angle, and the angle differs from frame to frame;
the sending module 303 is further configured to: and sending the second rotation result or the animation video to a display client.
According to a fourth aspect of the embodiments of the present disclosure, there is provided a model processing apparatus including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
receiving a target instruction input by a user;
when the target instruction is a manual rotation instruction carrying a rotation direction and a rotation angle, controlling the displayed structure model of the first object and the displayed navigation model of the second object to rotate according to the rotation direction and the rotation angle;
wherein the navigation model of the second object comprises a contour model of the second object and a contour model of the first object, the first object being a component of the second object; the contour model of the first object is arranged in the contour model of the second object according to the actual structure of the first object and the second object.
The processor may be further configured to:
the method further comprises the following steps:
sending the first rotation result to a display client; so that the display client displays the rotated structure model and the rotated navigation model according to the first rotation result.
Before the receiving of the target instruction input by the user, the method further comprises:
receiving a display instruction sent by the display client; the display instruction carries an identifier of a first object;
querying the navigation model and the structural model corresponding to the identifier from a model library;
and sending the navigation model and the structure model to the display client, so that the display client can display the structure model in a first window and display the navigation model in a second window.
After the sending the navigation model and the structure model to the display client, the method further comprises:
receiving a target instruction sent by the display client;
when the target instruction is other control instructions for changing the structure model, executing corresponding control operation on the structure model;
sending a control result to a display client; so that the display client displays the controlled structure model according to the control result.
After receiving the target instruction sent by the display client, the method further includes:
when the target instruction is an automatic playing instruction, controlling the structural model and the navigation model to rotate along a preset direction at a preset angular speed; or obtaining an animation video, wherein the animation video comprises multiple frames of images, each frame displaying the structural model and the navigation model at the same angle, and the angle differs from frame to frame;
and sending the second rotation result or the animation video to a display client.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 10 is a block diagram illustrating an apparatus for controlling a client, which is adapted to a terminal device, according to an exemplary embodiment. For example, control client 1700 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a gaming console, a tablet device, a medical device, a fitness device, a personal digital assistant, and the like.
Control client 1700 may include one or more of the following components: processing component 1702, memory 1704, power component 1706, multimedia component 1708, audio component 1710, input/output (I/O) interface 1712, sensor component 1714, and communications component 1716.
The processing component 1702 generally controls the overall operation of the client 1700, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. Processing component 1702 may include one or more processors 1720 to execute instructions to perform all or a portion of the steps of the above-described method. Further, processing component 1702 may include one or more modules that facilitate interaction between processing component 1702 and other components. For example, processing component 1702 may include a multimedia module to facilitate interaction between multimedia component 1708 and processing component 1702.
Memory 1704 is configured to store various types of data to support operations at control client 1700. Examples of such data include instructions for any application or method operating on control client 1700, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 1704 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power component 1706 provides power to the various components of the control client 1700. The power component 1706 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the control client 1700.
The multimedia component 1708 includes a screen providing an output interface between the control client 1700 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 1708 includes a front facing camera and/or a rear facing camera. When the control client 1700 is in an operation mode, such as a photographing mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
Audio component 1710 is configured to output and/or input audio signals. For example, audio component 1710 includes a Microphone (MIC) configured to receive external audio signals when control client 1700 is in an operating mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may further be stored in the memory 1704 or transmitted via the communication component 1716. In some embodiments, audio component 1710 also includes a speaker for outputting audio signals.
The I/O interface 1712 provides an interface between the processing component 1702 and peripheral interface modules, such as a keyboard, click wheel, buttons, and the like. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor component 1714 includes one or more sensors for providing status assessments of various aspects of the control client 1700. For example, the sensor component 1714 may detect an open/closed state of the control client 1700, the relative positioning of components such as the display and keypad of the control client 1700, a change in position of the control client 1700 or a component thereof, the presence or absence of user contact with the control client 1700, the orientation or acceleration/deceleration of the control client 1700, and a change in temperature of the control client 1700. The sensor component 1714 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 1714 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 1714 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 1716 is configured to facilitate wired or wireless communication between the control client 1700 and other devices. The control client 1700 may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In an exemplary embodiment, the communication component 1716 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 1716 further includes a Near Field Communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra-Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the control client 1700 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium including instructions is also provided, such as the memory 1704 including instructions executable by the processor 1720 of the control client 1700 to perform the above-described method. For example, the non-transitory computer-readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
A non-transitory computer-readable storage medium in which instructions, when executed by the processor of the control client 1700, enable the control client 1700 to perform the model processing method described above, the method comprising:
receiving a target instruction input by a user;
when the target instruction is a manual rotation instruction carrying a rotation direction and a rotation angle, controlling the displayed structure model of the first object and the displayed navigation model of the second object to rotate according to the rotation direction and the rotation angle;
wherein the navigation model of the second object comprises a contour model of the second object and a contour model of the first object, the first object being a component of the second object; the contour model of the first object is arranged in the contour model of the second object according to the actual structure of the first object and the second object.
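The nested arrangement above can be sketched as a small data structure. The following is a minimal illustration in Python; the class names, the offset representation, and the heart-inside-body example are hypothetical and not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class ContourModel:
    """A simplified contour mesh; `offset` places it inside a parent contour."""
    name: str
    offset: tuple = (0.0, 0.0, 0.0)

@dataclass
class NavigationModel:
    """Navigation model of the second object: its own contour plus the contour
    of the first object, positioned according to their actual structure."""
    outer: ContourModel  # contour model of the second object (e.g. a body)
    inner: ContourModel  # contour model of the first object (e.g. a heart)

# A heart contour arranged inside a body contour, mirroring the actual anatomy.
nav = NavigationModel(outer=ContourModel("body"),
                      inner=ContourModel("heart", offset=(0.0, 0.35, 0.05)))
```

Because the inner contour is stored relative to the outer contour, any rotation applied to the navigation model moves both contours together, which is what keeps the navigation window meaningful as an orientation aid.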
The method further comprises the following steps:
sending the first rotation result to a display client, so that the display client displays the rotated structure model and the rotated navigation model according to the first rotation result.
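The manual-rotation step can be illustrated with a short sketch. The angle representation (a single yaw value in degrees), the direction keywords, and all function names are assumptions for illustration, not the patent's implementation:

```python
def apply_manual_rotation(model_angles, direction, angle):
    """Rotate a model around the vertical axis.

    model_angles: dict holding the current 'yaw' in degrees (hypothetical
    representation). 'left' decreases yaw, 'right' increases it.
    """
    sign = -1 if direction == "left" else 1
    rotated = dict(model_angles)
    rotated["yaw"] = (rotated["yaw"] + sign * angle) % 360
    return rotated

def handle_manual_rotation(structure, navigation, direction, angle):
    # The structure model and the navigation model rotate together, so the
    # navigation window stays synchronized with the structure window.
    rotated_structure = apply_manual_rotation(structure, direction, angle)
    rotated_navigation = apply_manual_rotation(navigation, direction, angle)
    # This pair plays the role of the "first rotation result" that is sent
    # to the display client.
    return {"structure": rotated_structure, "navigation": rotated_navigation}

result = handle_manual_rotation({"yaw": 0.0}, {"yaw": 0.0}, "right", 30.0)
```

The key design point the sketch captures is that one instruction drives both models, rather than the display client rotating each model independently.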
Before the receiving of the target instruction input by the user, the method further comprises:
receiving a display instruction sent by the display client; the display instruction carries an identifier of a first object;
querying the navigation model and the structure model corresponding to the identifier from a model library;
and sending the navigation model and the structure model to the display client, so that the display client can display the structure model in a first window and display the navigation model in a second window.
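The query-and-dispatch flow above might look like the following sketch, where the model library is mocked as an in-memory dictionary; the identifier, file names, and window keys are hypothetical:

```python
# Mock model library mapping an object identifier to its two models.
MODEL_LIBRARY = {
    "heart-01": {
        "structure": "heart_structure.mesh",      # structure model of the first object
        "navigation": "body_with_heart.outline",  # navigation model of the second object
    },
}

def handle_display_instruction(identifier):
    """Look up both models for the identifier carried by a display instruction."""
    entry = MODEL_LIBRARY.get(identifier)
    if entry is None:
        raise KeyError(f"no model registered for identifier {identifier!r}")
    # Payload for the display client: the structure model goes to the first
    # window, the navigation model to the second window.
    return {"first_window": entry["structure"],
            "second_window": entry["navigation"]}
```

The lookup keyed on a single identifier reflects that the display client only needs to name the first object; the control client resolves which navigation model of the enclosing second object belongs with it.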
After the sending the navigation model and the structure model to the display client, the method further comprises:
receiving a target instruction sent by the display client;
when the target instruction is another control instruction for changing the structure model, executing a corresponding control operation on the structure model;
sending a control result to the display client, so that the display client displays the controlled structure model according to the control result.
After receiving the target instruction sent by the display client, the method further includes:
when the target instruction is an automatic playing instruction, controlling the structure model and the navigation model to rotate along a preset direction at a preset angular speed; or, obtaining an animation video, wherein the animation video comprises a plurality of frames of images, each frame displaying the structure model and the navigation model at a same angle, the angle being different in different frames;
and sending the second rotation result or the animation video to a display client.
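The animation-video alternative, in which each frame shows both models at one shared angle and the angle advances between frames, can be sketched as follows. The frame representation and the per-frame angular step are assumptions for illustration:

```python
def auto_play_frames(n_frames, step_deg):
    """Pre-compute frames for automatic playback.

    Within a frame the structure model and the navigation model share the same
    angle; the angle differs from frame to frame by `step_deg` degrees.
    """
    frames = []
    angle = 0.0
    for _ in range(n_frames):
        frames.append({"structure_angle": angle, "navigation_angle": angle})
        angle = (angle + step_deg) % 360
    return frames

frames = auto_play_frames(4, 90.0)
```

Pre-rendering the frames this way lets the control client send a finished animation video instead of streaming incremental rotation results.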
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (5)

1. A method of model processing, comprising:
receiving a display instruction sent by a display client; the display instruction carries an identifier of a first object;
querying, from a model library, a navigation model of a second object and a structure model of the first object corresponding to the identifier;
sending the navigation model and the structure model to the display client, so that the display client can display the structure model in a first window and display the navigation model in a second window;
wherein the navigation model of the second object comprises a contour model of the second object and a contour model of the first object, the first object being a component of the second object; the contour model of the first object is arranged in the contour model of the second object according to the actual structures of the first object and the second object;
receiving a target instruction input by a user;
when the target instruction is a manual rotation instruction carrying a rotation direction and a rotation angle, controlling the displayed structure model of the first object and the displayed navigation model of the second object to rotate according to the rotation direction and the rotation angle;
sending the first rotation result to a display client so that the display client can display the rotated structure model and the rotated navigation model according to the first rotation result;
receiving a target instruction sent by the display client;
and when the target instruction is another control instruction for changing the structure model, executing a corresponding control operation on the structure model, and sending a control result to the display client, so that the display client displays the controlled structure model according to the control result.
2. The method of claim 1, wherein after receiving the target instruction sent by the display client, the method further comprises:
when the target instruction is an automatic playing instruction, controlling the structure model and the navigation model to rotate along a preset direction at a preset angular speed; or obtaining an animation video, wherein the animation video comprises a plurality of frames of images, each frame displaying the structure model and the navigation model at a same angle, the angle being different in different frames;
and sending the second rotation result or the animation video to a display client.
3. A model processing apparatus, comprising:
the receiving module is used for receiving a display instruction sent by the display client, the display instruction carrying an identifier of a first object; the receiving module is further used for receiving a target instruction input by a user and for receiving a target instruction sent by the display client;
the query module is used for querying, from a model library, a navigation model of a second object and a structure model of the first object corresponding to the identifier;
a sending module, configured to send the navigation model and the structure model to the display client, so that the display client displays the structure model in a first window and displays the navigation model in a second window;
wherein the navigation model of the second object comprises a contour model of the second object and a contour model of the first object, the first object being a component of the second object; the contour model of the first object is arranged in the contour model of the second object according to the actual structures of the first object and the second object;
the rotation module is used for controlling the displayed structure model of the first object and the displayed navigation model of the second object to rotate according to the rotation direction and the rotation angle when the target instruction is a manual rotation instruction carrying the rotation direction and the rotation angle;
the sending module is further used for sending the first rotation result to a display client so that the display client can display the rotated structure model and the rotated navigation model according to the first rotation result;
and the rotating module is further used for executing corresponding control operation on the structure model and sending a control result to a display client when the target instruction is other control instructions for changing the structure model, so that the display client displays the controlled structure model according to the control result.
4. A control client, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
receiving a display instruction sent by a display client; the display instruction carries an identifier of a first object;
querying, from a model library, a navigation model of a second object and a structure model of the first object corresponding to the identifier;
sending the navigation model and the structure model to the display client, so that the display client can display the structure model in a first window and display the navigation model in a second window;
wherein the navigation model of the second object comprises a contour model of the second object and a contour model of the first object, the first object being a component of the second object; the contour model of the first object is arranged in the contour model of the second object according to the actual structures of the first object and the second object;
receiving a target instruction input by a user;
when the target instruction is a manual rotation instruction carrying a rotation direction and a rotation angle, controlling the displayed structure model of the first object and the displayed navigation model of the second object to rotate according to the rotation direction and the rotation angle;
sending the first rotation result to a display client so that the display client can display the rotated structure model and the rotated navigation model according to the first rotation result;
receiving a target instruction sent by the display client;
and when the target instruction is another control instruction for changing the structure model, executing a corresponding control operation on the structure model, and sending a control result to the display client, so that the display client displays the controlled structure model according to the control result.
5. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1-2.
CN201811300475.7A 2018-11-02 2018-11-02 Model processing method and device, control client and storage medium Active CN109407942B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811300475.7A CN109407942B (en) 2018-11-02 2018-11-02 Model processing method and device, control client and storage medium


Publications (2)

Publication Number Publication Date
CN109407942A CN109407942A (en) 2019-03-01
CN109407942B true CN109407942B (en) 2022-05-06

Family

ID=65471025

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811300475.7A Active CN109407942B (en) 2018-11-02 2018-11-02 Model processing method and device, control client and storage medium

Country Status (1)

Country Link
CN (1) CN109407942B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109446352B (en) * 2018-11-07 2022-05-27 深圳市天诺泰安健康信息技术有限公司 Model display method, device, client and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107122561A (en) * 2017-05-08 2017-09-01 王征 It is a kind of based on system and method for the 3D model binding relationship storehouses to multiple 3D models bindings
CN107329669A (en) * 2017-06-22 2017-11-07 青岛海信医疗设备股份有限公司 The method and device of the sub- organ model of human body is selected in human medical threedimensional model
CN107885432A (en) * 2017-11-24 2018-04-06 杭州荔宝信息技术有限公司 A kind of fetus three-dimension virtual reality man-machine interaction method and system
CN108295472A (en) * 2017-12-28 2018-07-20 深圳市创梦天地科技股份有限公司 A kind of joining method and terminal of built-up pattern

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7190365B2 (en) * 2001-09-06 2007-03-13 Schlumberger Technology Corporation Method for navigating in a multi-scale three-dimensional scene




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant