WO2024021036A1 - Model control method, apparatus, device, system and computer storage medium - Google Patents

Model control method, apparatus, device, system and computer storage medium

Info

Publication number
WO2024021036A1
Authority
WO
WIPO (PCT)
Prior art keywords
model
terminal
control
server
video data
Prior art date
Application number
PCT/CN2022/109021
Other languages
English (en)
French (fr)
Inventor
Li Cunqing
Original Assignee
BOE Technology Group Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BOE Technology Group Co., Ltd.
Priority to PCT/CN2022/109021 priority Critical patent/WO2024021036A1/zh
Priority to CN202280002449.4A priority patent/CN117813579A/zh
Publication of WO2024021036A1 publication Critical patent/WO2024021036A1/zh

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]

Definitions

  • This application relates to the field of information technology, and in particular to a model control method, device, equipment, system and computer storage medium.
  • the terminal runs the model (such as a house model or a vehicle model) and displays the running process of the model on the display interface, then receives the user's control instructions and adjusts the model based on the control instructions. The user can see the adjusted model through the display interface.
  • the operation of the model may be laggy, resulting in lower control efficiency of the above model control method.
  • Embodiments of the present application provide a model control method, device, equipment, system and computer storage medium.
  • the technical solutions are as follows:
  • a model control method is provided and applied in a terminal.
  • the method includes:
  • the obtaining control information for the video data includes:
  • the control mode at least includes a mouse control mode and a touch control mode.
  • Each of the control modes includes a correspondence between operation information and control information;
  • the operation information is converted into the control information based on the corresponding relationship included in the current control mode of the terminal.
  • control information includes at least two kinds of control instructions,
  • Converting the operation information into the control information based on the conversion relationship corresponding to the current control mode of the terminal includes:
  • the operation information is converted into the second control instruction.
  • control information includes at least two control instructions of click, double-click, perspective translation, perspective zoom, and perspective change.
  • determining the current control mode of the terminal includes:
  • the current control mode of the terminal is the mouse control mode.
  • the receiving and displaying the video data of the model provided by the server includes:
  • a model control method is provided and applied in a server.
  • the method includes:
  • the video data of the adjusted model is sent to the terminal.
  • the model is a three-dimensional model
  • the running model includes:
  • the terminal includes at least a first terminal and a second terminal,
  • the adjusting the model based on the control information includes:
  • the model is adjusted based on the first control information provided by the first terminal and the second control information provided by the second terminal.
  • the first terminal and the second terminal are terminals with different operating systems.
  • a model control device includes:
  • the first display module is used to receive and display the video data of the model provided by the server;
  • a control information acquisition module used to acquire control information for the video data
  • a first sending module configured to send the control information to the server, where the server is configured to adjust the model based on the control information
  • the second display module is used to receive and display the video data of the adjusted model provided by the server.
  • a model control device includes:
  • Model running module used to run the model
  • a first video acquisition module used to acquire video data of the model
  • the second sending module is used to send the video data to the terminal
  • a receiving module, used to receive the control information provided by the terminal for the model
  • an adjustment module configured to adjust the model based on the control information
  • the second video acquisition module is used to acquire the video data of the adjusted model
  • the second sending module is configured to send the video data of the adjusted model to the terminal.
  • a model control system includes: a terminal and a server;
  • the server obtains the video data of the model
  • the server sends the video data to the terminal
  • the terminal receives and displays the video data of the model provided by the server;
  • the terminal obtains control information for the video data
  • the terminal sends the control information to the server;
  • the server receives the control information provided by the terminal for the model
  • the server adjusts the model based on the control information
  • the server obtains the video data of the adjusted model
  • the server sends the video data of the adjusted model to the terminal;
  • the terminal receives and displays the video data of the adjusted model provided by the server.
  • a model control device includes a processor and a memory.
  • the memory stores at least one instruction, at least one program, a code set or an instruction set, and the at least one instruction, the at least one program, the code set or the instruction set is loaded and executed by the processor to implement the above model control method.
  • a non-transitory computer storage medium stores at least one instruction, at least one program, a code set or an instruction set, and the at least one instruction, the at least one program, the code set or the instruction set is loaded and executed by a processor to implement the above model control method.
  • a computer program product or computer program includes computer instructions, and the computer instructions are stored in a computer-readable storage medium.
  • the processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device executes the above-mentioned model control method.
  • the server provides the video data of the model to the terminal, and the terminal sends the control information for the video data to the server, so that the server adjusts the model and sends the video data of the adjusted model to the terminal for display.
  • In this way, the model is run by the server and is not limited by the capabilities of the terminal, which can solve the problem of low control efficiency of the model control method in the related art and achieve the effect of improving the control efficiency of the model control method.
  • Figure 1 is a schematic structural diagram of an application scenario applied in the embodiment of the present application.
  • Figure 2 is a method flow chart of a model control method provided by an embodiment of the present application.
  • Figure 3 is a method flow chart of another model control method provided by an embodiment of the present application.
  • Figure 4 is a method flow chart of another model control method provided by an embodiment of the present application.
  • Figure 5 is an architectural schematic diagram of a player in an embodiment of the present application.
  • Figure 6 is a schematic flowchart of obtaining control information for video data provided by an embodiment of the present application.
  • Figure 7 is a schematic diagram of a display interface of a terminal in an embodiment of the present application.
  • Figure 8 is an architectural schematic diagram of the model control method provided by the embodiment of the present application.
  • Figure 9 is a block diagram of a model control device provided by an embodiment of the present application.
  • Figure 10 is a block diagram of another model control device provided by an embodiment of the present application.
  • Figure 11 is a block diagram of a model control system provided by an embodiment of the present application.
  • One way to display information to users is to build a model based on the information to be displayed and display the model to the user, and the user can control the model, such as selecting, zooming the perspective, panning the perspective, etc.
  • the user can use the model to gain a more comprehensive view and clearly obtain relevant information.
  • the operation of the model requires a large amount of computing power. If the computing power of the device running the model is weak, the operation of the model will be laggy, which greatly affects the user experience.
  • the embodiments of the present application provide a model control method, device, equipment, system and computer storage medium, which can solve some of the problems in the above related technologies.
  • FIG. 1 is a schematic structural diagram of an application scenario applied in the embodiment of the present application.
  • the application scenario may include a terminal 11 and a server 12 .
  • the terminal 11 can establish a wired connection or a wireless connection with the server 12 .
  • the terminal 11 may include various terminals such as smartphones, tablet computers, smart wearable devices, desktop computers, notebook computers, etc.
  • the number of terminals 11 may be multiple.
  • FIG. 1 shows a case where the number of terminals 11 is three, but this is not limited.
  • the terminal 11 can run an application program, and the application program can be built using Flutter (an open source toolkit for building user interfaces) technology.
  • the application program is used to obtain video data from the server 12 and send control information (such as control signaling, etc.) to the server 12 .
  • the server 12 may include a server or a server cluster, and the server 12 may have powerful data processing capabilities.
  • the server 12 may have components capable of running various models, such as UE model plug-ins, etc.
  • the server 12 can also have a video stream generation component, which can generate video data (a video stream) in Real-Time Messaging Protocol (RTMP) or Real-Time Streaming Protocol (RTSP) format.
  • the model control method provided by the embodiment of the present application can be used to display a three-dimensional model including multiple buildings (building groups) to the user.
  • the building group can be a residential building group, a commercial block, an office building group, an ancient building group or a city model, etc.
  • the model control method provided by the embodiment of the present application can be used to display three-dimensional models including consumer products to users.
  • the consumer products can include mobile phones, tablet computers, smart wearable devices, desktop computers, notebook computers, cars, bicycles, motorcycles, etc.
  • users can conveniently and quickly learn the information to be displayed in the three-dimensional models of these consumer products, such as the appearance from various angles, patterns in specific details, and so on.
  • Figure 2 is a method flow chart of a model control method provided by an embodiment of the present application.
  • the method can be applied to the terminal in the implementation environment shown in Figure 1.
  • the method can include the following steps:
  • Step 201 Receive and display the video data of the model provided by the server.
  • Step 202 Obtain control information for video data.
  • Step 203 Send the control information to the server, and the server is configured to adjust the model based on the control information.
  • Step 204 Receive and display the video data of the adjusted model provided by the server.
  • the model control method provided by the embodiment of the present application provides the video data of the model to the terminal through the server, and the terminal sends the control information for the video data to the server, so that the server adjusts the model.
  • the video data of the adjusted model is sent to the terminal and displayed by the terminal.
  • In this way, a control method is realized in which the server runs and adjusts the model while the terminal displays and controls the model; the model is run by the server and is not limited by the capabilities of the terminal.
  • This can solve the problem of low control efficiency of the model control method in the related art and achieve the effect of improving the control efficiency of the model control method.
  • Figure 3 is a method flow chart of another model control method provided by an embodiment of the present application. This method can be applied to the server in the implementation environment shown in Figure 1. This method can include the following steps:
  • Step 301 Run the model.
  • Step 302 Obtain the video data of the model.
  • Step 303 Send video data to the terminal.
  • Step 304 Receive model-specific control information provided by the terminal.
  • Step 305 Adjust the model based on the control information.
  • Step 306 Obtain the video data of the adjusted model.
  • Step 307 Send the adjusted model video data to the terminal.
  • the model control method provided by the embodiment of the present application provides the video data of the model to the terminal through the server, and the terminal sends the control information for the video data to the server, so that the server adjusts the model.
  • the video data of the adjusted model is sent to the terminal and displayed by the terminal.
  • In this way, a control method is realized in which the server runs and adjusts the model while the terminal displays and controls the model; the model is run by the server and is not limited by the capabilities of the terminal.
  • This can solve the problem of low control efficiency of the model control method in the related art and achieve the effect of improving the control efficiency of the model control method.
  • Figure 4 is a method flow chart of another model control method provided by an embodiment of the present application. This method can be applied in the implementation environment shown in Figure 1. This method can include the following steps:
  • Step 401 The server runs the model.
  • the model can be run by the server, and the model can be preset in the server.
  • the server can run the model through a three-dimensional model running component (such as a UE model plug-in). After the model is run, it can be adjusted under the control of control information to facilitate observation.
  • Step 402 The server obtains the video data of the model.
  • the server can start collecting video data of the model, and the video data can be in the form of a video stream.
  • the server may have a video stream generation component, and the server may use the video stream generation component to generate video data in a real-time messaging protocol format or video data in a real-time streaming protocol format.
  • Step 403 The server sends video data to the terminal.
  • the server can send video data of the model to the terminal through a wireless connection or a wired connection with the terminal.
  • Step 404 The terminal receives and displays the video data of the model provided by the server.
  • After receiving the video data of the model sent by the server, the terminal can display the video data of the model on the display interface.
  • the actions performed by the terminal may include:
  • the terminal plays the video data of the model through the local player component. This is just a video playback action, which has low requirements on the computing power of the terminal.
  • the terminal can play the video data of the model smoothly.
  • Figure 5 is a schematic diagram of the architecture of a player in an embodiment of the present application.
  • the player of the terminal can be divided into upper and lower layers.
  • the lower layer s2 is a player capable of playing RTMP/RTSP video streams.
  • s1 can be the interactive gesture capture layer.
  • the interactive gesture capture layer is the upper layer and the player is the lower layer.
  • the pseudo code can be:
  • Stack is Flutter's own hierarchical component. Multiple sub-components can be placed in Stack. The sub-component PlayerWidget placed first is on the lower layer, and the sub-component CaptureWidget placed later is on the upper layer.
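  • As an illustrative sketch (not the application's actual code), the layered structure described above can be written in Flutter/Dart as follows; PlayerWidget and CaptureWidget are the sub-components named in the description, and their implementations are assumed:

```dart
import 'package:flutter/widgets.dart';

// Illustrative sketch of the two-layer player: the sub-component placed
// first in the Stack is on the lower layer, and the one placed later is
// on the upper layer.
Widget buildLayeredPlayer() {
  return Stack(
    children: <Widget>[
      // Lower layer (s2): plays the RTMP/RTSP video stream.
      PlayerWidget(),
      // Upper layer (s1): the interactive gesture capture layer.
      CaptureWidget(),
    ],
  );
}
```

Because Stack paints its children in list order, the gesture capture layer sits transparently above the player and can intercept the user's operations without blocking the video display.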
  • the terminal can play video data through the video playback component (such as PlayerWidget) in Flutter, and obtain the user's operation information through the touch component (such as CaptureWidget) in Flutter.
  • Step 405 The terminal obtains control information for video data.
  • This control information can be used to adjust the video data.
  • Step 405 may include the following sub-steps:
  • Sub-step 4051 Obtain operation information.
  • the operation information is information generated by the user operating the terminal. Based on different control modes, the terminal can obtain the operation information in different ways.
  • the way for the terminal to obtain the operation information includes at least obtaining it through a mouse and obtaining it through a touch screen (or touch pad).
  • Sub-step 4052 Determine the current control mode of the terminal.
  • the control mode at least includes a mouse control mode and a touch control mode, and each control mode includes a correspondence between operation information and control information.
  • Sub-step 4052 includes at least the following execution actions:
  • the designated area may refer to a designated area in the display interface of the terminal, and the designated area may be related to the video data of the displayed model.
  • the designated area may be the area where the model is located on the display interface, or an edge area of the display panel. The user can determine the control mode by operating in this designated area.
  • FIG. 7 which is a schematic diagram of a display interface of a terminal in an embodiment of the present application.
  • the display interface 71 displays a model A, which is a three-dimensional model of a building, and the designated area may be the area occupied by the model A in the display interface 71 .
  • the terminal can determine whether the location corresponding to the user's operation information belongs to the designated area.
  • If the location belongs to the designated area, the terminal may determine that the current control mode is the touch control mode.
  • If the location does not belong to the designated area, the terminal may determine that the current control mode is the mouse control mode.
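  • The determination of the control mode can be sketched as follows (an illustrative Dart sketch; the mapping of "inside the designated area" to the touch control mode follows the order of the description above and is an assumption):

```dart
import 'dart:ui';

/// Sketch of sub-step 4052: decide the current control mode by checking
/// whether the location of the operation falls inside the designated area
/// (for example, the rectangle occupied by the model on the display
/// interface).
enum ControlMode { mouse, touch }

ControlMode determineControlMode(Offset operationLocation, Rect designatedArea) {
  // Hit-test the operation location against the designated area.
  return designatedArea.contains(operationLocation)
      ? ControlMode.touch
      : ControlMode.mouse;
}
```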
  • the terminal can be controlled through a mouse and a touch screen at the same time (for example, when the terminal is a notebook computer with a touch screen); the user can operate the terminal through at least one of the control methods, or can select one of the control methods to operate the terminal.
  • CaptureWidget is a gesture capture component, and its subcomponent is MouseRegion, which is used to detect the mouse.
  • the subcomponent of MouseRegion is a Container, and the size of the Container can be set to specify the size of the entire gesture capture area.
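  • A minimal sketch of the CaptureWidget hierarchy described above (CaptureWidget containing a MouseRegion, which in turn contains a Container); the size values and the event handling are assumptions for illustration:

```dart
import 'package:flutter/widgets.dart';

// Illustrative sketch: CaptureWidget is the gesture capture component,
// its subcomponent MouseRegion detects the mouse, and the Container's
// size specifies the size of the entire gesture capture area.
class CaptureWidget extends StatelessWidget {
  const CaptureWidget({super.key});

  @override
  Widget build(BuildContext context) {
    return MouseRegion(
      // A mouse hover event indicates that the mouse is being used.
      onHover: (event) {
        // e.g. record that the current control mode is the mouse control mode
      },
      child: Container(
        width: 1280,  // assumed width of the gesture capture area
        height: 720,  // assumed height of the gesture capture area
        color: const Color(0x00000000), // transparent, so the player below stays visible
      ),
    );
  }
}
```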
  • Sub-step 4053 Convert the operation information into control information based on the corresponding relationship included in the current control mode of the terminal.
  • Table 1 shows the operation information of the mouse control mode (which can be a control mode implemented in an operating system such as Windows/Mac/Linux) and the operation information of the touch control mode, and the corresponding control information.
  • In the mouse control mode, the operation information of clicking the left mouse button can correspond to the click control information.
  • the click control information can correspond to a predetermined control method, such as selecting a model and highlighting the edge of the selected model, and so on.
  • control information includes at least two control instructions.
  • each control information can be a control instruction.
  • Sub-step 4053 may include the following execution processes:
  • the terminal can compare the operation information with each control instruction in sequence through a loop judgment. For example, when the operation information received by the terminal in the mouse control mode is "drag the left mouse button", then according to the correspondence recorded in Table 1, from top to bottom, the terminal first determines whether the operation information "drag the left mouse button" corresponds to the first control instruction "click". Since the operation information corresponding to the first control instruction "click" is "click the left mouse button", the operation information "drag the left mouse button" does not correspond to the first control instruction "click".
  • If the operation information corresponds to the first control instruction, the terminal can convert the operation information into the first control instruction.
  • For example, when the operation information received by the terminal in the mouse control mode is "click the left mouse button", then according to the correspondence recorded in Table 1, from top to bottom, the terminal first determines whether the operation information "click the left mouse button" corresponds to the first control instruction "click". Since the operation information corresponding to the first control instruction "click" is "click the left mouse button", the two correspond, and the terminal can convert the operation information "click the left mouse button" into the first control instruction "click".
  • If the operation information does not correspond to the first control instruction, the terminal can continue to determine whether the operation information corresponds to the second of the at least two control instructions.
  • the determination method can refer to the above-mentioned process of determining whether the operation information corresponds to the first of the at least two control instructions, which will not be described again in the embodiment of the present application.
  • control information includes n types of control instructions
  • the terminal can compare the operation information with the n types of control instructions in sequence until the comparison ends successfully.
  • the terminal may display prompt information on the operation interface to remind the user that the operation information does not match, or the terminal may not respond.
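  • The loop judgment described above can be sketched as follows (an illustrative Dart sketch; Table 1 is not reproduced in this text, so only the "click" correspondence comes from the description and the second entry is an assumption):

```dart
/// Sketch of sub-step 4053: compare the operation information with the n
/// types of control instructions in sequence, from top to bottom, until the
/// comparison ends successfully.
const Map<String, String> mouseModeCorrespondence = {
  'click the left mouse button': 'click',          // from the description
  'drag the left mouse button': 'perspective translation', // assumed entry
};

String? convertOperationInfo(
    String operationInfo, Map<String, String> correspondence) {
  // Loop judgment: check each correspondence entry in order.
  for (final entry in correspondence.entries) {
    if (entry.key == operationInfo) {
      return entry.value; // comparison ends successfully
    }
  }
  // No match: the terminal may display prompt information or not respond.
  return null;
}
```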
  • A signaling protocol can be used to define the format of the control information transmitted between the terminal and the server.
  • Flutter itself provides some basic gesture capture components, such as GestureDetector.
  • GestureDetector can detect single clicks, double clicks, etc., but it is not flexible enough, may be difficult to apply to the specific control of three-dimensional models (such as perspective change and perspective zoom, etc.), and cannot be flexibly applied to different control modes.
  • various control information is redefined to facilitate application in the control scenario of the three-dimensional model, thereby improving the applicability of the model control method provided by the embodiment of the present application.
  • Step 406 The terminal sends the control information to the server.
  • the terminal can send the control information to the server through a wireless connection or a wired connection with the server, so that the server can control the model running in the server based on the control information.
  • the terminal can encapsulate control information in JSON format and transmit it to the server through network protocols (such as TCP, UDP, WebSocket, MQTT, HTTP, etc.).
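  • A sketch of this encapsulation (illustrative only: the JSON field names and the server address are assumptions, and WebSocket is chosen from the protocols listed above):

```dart
import 'dart:convert';
import 'dart:io';

/// Sketch of step 406: encapsulate the control information in JSON format
/// and transmit it to the server over a WebSocket connection.
Future<void> sendControlInformation(
    String instruction, double x, double y) async {
  // Encode the control information as JSON signaling.
  final signaling = jsonEncode({
    'instruction': instruction, // e.g. 'click', 'perspective zoom'
    'position': {'x': x, 'y': y},
  });
  // Assumed server endpoint for receiving signaling.
  final socket = await WebSocket.connect('ws://example-server:8080/control');
  socket.add(signaling);
  await socket.close();
}
```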
  • the terminals involved in the embodiment of the present application may include multiple terminals, and each of these multiple terminals may send control information to the server through steps 401 to 406.
  • the server can receive control information provided by the terminal for the model running in the server.
  • the terminal includes at least a first terminal and a second terminal, and what the server receives is the first control information for the model provided by the first terminal and the second control information for the model provided by the second terminal.
  • the server may receive the first control information and the second control information successively, or receive the first control information and the second control information at the same time, which is not limited in this embodiment of the present application.
  • an application can be run in the terminal.
  • the application is implemented using the cross-platform Flutter technology and can be applied to various platforms such as Windows, Android, iOS, and PC browsers.
  • the application using Flutter technology can display the display interface shown in Figure 7, and the user can input control information for the model in the display interface of the application.
  • Step 407 The server adjusts the model based on the control information.
  • the server can adjust the model based on the control information provided by the terminal.
  • the server may adjust the model based on the first control information provided by the first terminal and the second control information provided by the second terminal.
  • the server can adjust the model sequentially based on the order in which the control information is received. For example, if the server first receives the first control information sent by the first terminal, the server can first adjust the model based on the first control information, and after receiving the second control information sent by the second terminal, the server continues to adjust the model based on the second control information.
  • In another exemplary embodiment, in a scenario where multiple terminals jointly control the model in the server, the server can adjust the model respectively through steps 407 to 409.
  • Step 408 The server obtains the video data of the adjusted model.
  • the server may obtain video data of the adjusted model.
  • Step 409 The server sends the adjusted model video data to the terminal.
  • the server can send video data of the model to the terminal through a wireless connection or a wired connection with the terminal.
  • the server can send the adjusted video data of the model to multiple terminals.
  • Step 410 The terminal receives and displays the adjusted model video data provided by the server.
  • After receiving the adjusted model video data provided by the server, the terminal can display the video data on the display interface. In this way, the function of displaying and adjusting the model on the terminal side is realized, and the terminal does not need to directly run and adjust the model, which greatly reduces the functional requirements for the terminal, facilitates the display and control of complex models on the terminal, and improves the user experience.
  • If the server sends the video data of the adjusted model to multiple terminals, then the multiple terminals can see the same video data of the adjusted model. In this way, the function of displaying the model to multiple terminals and allowing them to control the model at the same time can be realized.
  • the model control method provided by the embodiment of the present application provides the video data of the model to the terminal through the server, and the terminal sends the control information for the video data to the server, so that the server adjusts the model.
  • the video data of the adjusted model is sent to the terminal and displayed by the terminal.
  • In this way, a control method is realized in which the server runs and adjusts the model while the terminal displays and controls the model; the model is run by the server and is not limited by the capabilities of the terminal.
  • This can solve the problem of low control efficiency of the model control method in the related art and achieve the effect of improving the control efficiency of the model control method.
  • Figure 8 is an architectural schematic diagram of a model control method provided by an embodiment of the present application.
  • the server includes a UE model plug-in, a UE model-to-video-stream module, a streaming server module and a signaling receiver; the terminal may include a gesture capture module in the player, a streaming media player module, a signaling protocol conversion module and a signaling sending module.
  • the UE model plug-in in the server can run the model, the UE model-to-video-stream module collects the video stream, and the streaming server module pushes the video stream to the terminal.
  • the terminal plays the video stream through the streaming media player and obtains the user's control information through the gesture capture module, then converts the control information into signaling in a predetermined format through the signaling protocol conversion module, and sends it to the server via the signaling sending module.
  • the signaling receiver receives the signaling and sends the signaling to the UE model plug-in to adjust the model.
  • Figure 9 is a block diagram of a model control device provided by an embodiment of the present application.
  • the model control device can be partially or fully integrated in the terminal in the implementation environment shown in Figure 1.
  • the model control device 900 includes:
  • the first display module 910 is used to receive and display the video data of the model provided by the server;
  • the control information acquisition module 920 is used to acquire control information for video data
  • the first sending module 930 is used to send control information to the server, and the server is configured to adjust the model based on the control information;
  • the second display module 940 is used to receive and display the video data of the adjusted model provided by the server.
  • In the model control device provided by the embodiments of this application, the server provides the video data of the model to the terminal, and the terminal sends control information for the video data to the server, so that the server adjusts the model and sends the video data of the adjusted model to the terminal for display.
  • A control method in which the server runs and adjusts the model while the terminal displays and controls it is thus realized; since the model is run by the server, it is not limited by the terminal's capabilities, which solves the problem of low control efficiency of model control methods in related technologies and improves control efficiency.
  • Figure 10 is a block diagram of another model control device provided by an embodiment of the present application.
  • This model control device can be partially or fully integrated in the server in the implementation environment shown in Figure 1.
  • the model control device 1000 includes:
  • the model running module 1010 is used to run the model;
  • the first video acquisition module 1020 is used to acquire the video data of the model;
  • the second sending module 1030 is used to send the video data to the terminal;
  • the information receiving module 1040 is used to receive the control information provided by the terminal for the model;
  • the adjustment module 1050 is used to adjust the model based on the control information;
  • the second video acquisition module 1060 is used to acquire the video data of the adjusted model;
  • the second sending module 1070 is configured to send the video data of the adjusted model to the terminal.
  • In the model control device provided by the embodiments of this application, the server provides the video data of the model to the terminal, and the terminal sends control information for the video data to the server, so that the server adjusts the model and sends the video data of the adjusted model to the terminal for display.
  • A control method in which the server runs and adjusts the model while the terminal displays and controls it is thus realized; since the model is run by the server, it is not limited by the terminal's capabilities, which solves the problem of low control efficiency of model control methods in related technologies and improves control efficiency.
  • FIG 11 is a block diagram of a model control system provided by an embodiment of the present application.
  • the model control system includes: a terminal 1110 and a server 1120.
  • Server 1120 runs the model.
  • the server 1120 obtains the video data of the model.
  • the server 1120 sends the video data to the terminal 1110.
  • the terminal 1110 receives and displays the video data of the model provided by the server 1120.
  • the terminal 1110 obtains control information for the video data.
  • the terminal 1110 sends the control information to the server 1120.
  • the server 1120 receives the control information provided by the terminal 1110 for the model.
  • Server 1120 adjusts the model based on the control information.
  • the server 1120 obtains the video data of the adjusted model.
  • the server 1120 sends the adjusted video data of the model to the terminal 1110.
  • the terminal 1110 receives and displays the video data of the adjusted model provided by the server 1120.
  • In the model control system provided by the embodiments of this application, the server provides the video data of the model to the terminal, and the terminal sends control information for the video data to the server, so that the server adjusts the model and sends the video data of the adjusted model to the terminal for display.
  • A control method in which the server runs and adjusts the model while the terminal displays and controls it is thus realized; since the model is run by the server, it is not limited by the terminal's capabilities, which solves the problem of low control efficiency of model control methods in related technologies and improves control efficiency.
  • Embodiments of the present application also provide a model control device.
  • the model control device includes a processor and a memory.
  • the memory stores at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by the processor to implement the model control method described above.
  • Embodiments of the present application also provide a non-transitory computer storage medium storing at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by a processor to implement the model control method described above.
  • Embodiments of the present application also provide a computer program product or computer program.
  • the computer program product or computer program includes computer instructions, and the computer instructions are stored in a computer-readable storage medium.
  • the processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device executes the above-mentioned model control method.
  • In this application, "at least one of A and B" merely describes an association between associated objects and indicates that three relationships may exist: A exists alone, A and B exist simultaneously, or B exists alone.
  • Similarly, "at least one of A, B, and C" indicates that seven relationships may exist: A alone, B alone, C alone, A and B simultaneously, A and C simultaneously, B and C simultaneously, or A, B, and C simultaneously.
  • "At least one of A, B, C, and D" indicates that fifteen relationships may exist: A alone, B alone, C alone, D alone, A and B, A and C, A and D, B and C, B and D, C and D, A, B, and C, A, B, and D, A, C, and D, B, C, and D, or A, B, C, and D simultaneously.
  • the disclosed devices and methods can be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the units is only a logical function division. In actual implementation, there may be other division methods.
  • multiple units or components may be combined or can be integrated into another system, or some features can be ignored, or not implemented.
  • The mutual coupling, direct coupling, or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, devices, or units, and may be in electrical, mechanical, or other forms.
  • the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place, or they may be distributed to multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

本申请公开了一种模型控制方法、装置、设备、系统以及计算机存储介质,属于信息技术领域。方法包括:通过服务器向终端提供模型的视频数据,终端将针对视频数据的控制信息发送至服务器,以由服务器来对模型进行调节,并将调节后的模型的视频数据发送至终端,由终端进行展示,如此便实现了服务器来运行模型以及调节模型,终端展示以及控制模型的控制方法,且方法中模型由服务器运行,不会受制于终端的机能,可以解决相关技术中模型控制方法的控制效率较低的问题,实现了提高模型控制方法的控制效率的效果。

Description

模型控制方法、装置、设备、系统以及计算机存储介质 技术领域
本申请涉及信息技术领域,特别涉及一种模型控制方法、装置、设备、系统以及计算机存储介质。
背景技术
随着技术的发展,各种信息的展示方式层出不穷,通过可以控制的模型来展示信息是一种较为直观的展示方式。
目前的一种模型控制方法中,由终端运行模型(如房屋模型或车辆模型),并在显示界面展示模型的运行过程,之后接收用户的控制指令,并基于该控制指令来对模型进行调整,用户可以通过显示界面看到调整后的模型。
但是,若模型较为复杂时,受限于终端的机能,可能模型的运行较为卡顿,导致上述模型控制方法的控制效率较低。
发明内容
本申请实施例提供了一种模型控制方法、装置、设备、系统以及计算机存储介质。所述技术方案如下:
根据本申请实施例的一方面,提供一种模型控制方法,应用于终端中,所述方法包括:
接收并展示服务器提供的模型的视频数据;
获取针对所述视频数据的控制信息;
将所述控制信息发送至所述服务器,所述服务器被配置为基于所述控制信息对所述模型进行调节;
接收并展示所述服务器提供的调节后的模型的视频数据。
可选地,所述获取针对所述视频数据的控制信息,包括:
获取操作信息;
确定所述终端当前的控制模式,所述控制模式至少包括鼠标控制模式以及触摸控制模式,每种所述控制模式均包括操作信息与控制信息的对应关系;
以所述终端当前的控制模式包括的对应关系,将所述操作信息转换为所述控制信息。
可选地,所述控制信息至少包括两种控制指令,
所述以所述终端当前的控制模式包括的对应关系,将所述操作信息转换为所述控制信息,包括:
基于所述终端当前的控制模式对应的对应关系,判断所述操作信息是否与所述至少两种控制指令中的第一控制指令对应;
响应于所述操作信息与所述第一控制指令对应,则将所述操作信息转换为所述第一控制指令;
响应于所述操作信息与所述第一控制指令不对应,判断所述操作信息是否与所述至少两种控制指令中的第二控制指令对应;
响应于所述操作信息与所述第二控制指令对应,则将所述操作信息转换为所述第二控制指令。
可选地,所述控制信息至少包括单击、双击、视角平移、视角缩放以及视角改变中的至少两种控制指令。
可选地,所述确定所述终端当前的控制模式,包括:
判断所述操作信息对应的位置是否属于指定区域;
响应于所述操作信息对应的位置不属于所述指定区域,确定所述终端当前的控制模式为所述触摸控制模式;
响应于所述操作信息对应的位置属于所述指定区域,确定所述终端当前的控制模式为所述鼠标控制模式。
可选地,所述接收并展示服务器提供的模型的视频数据,包括:
接收所述服务器提供的所述模型的视频数据;
通过本地的播放器组件播放所述模型的视频数据。
根据本申请实施例的另一方面,提供一种模型控制方法,应用于服务器中,所述方法包括:
运行模型;
获取所述模型的视频数据;
向终端发送所述视频数据;
接收所述终端提供的针对所述模型的控制信息;
基于所述控制信息对所述模型进行调节;
获取调节后的模型的视频数据;
向所述终端发送所述调节后的模型的视频数据。
可选地,所述模型为三维模型,所述运行模型,包括:
通过三维模型运行组件运行所述模型。
可选地,所述终端至少包括第一终端以及第二终端,
所述基于所述控制信息对所述模型进行调节,包括:
基于所述第一终端提供的第一控制信息,以及所述第二终端提供的第二控制信息,对所述模型进行调节。
可选地,所述第一终端和所述第二终端为不同操作系统的终端。
根据本申请实施例的另一方面,提供一种模型控制装置,所述模型控制装置包括:
第一展示模块,用于接收并展示服务器提供的模型的视频数据;
控制信息获取模块,用于获取针对所述视频数据的控制信息;
第一发送模块,用于将所述控制信息发送至所述服务器,所述服务器被配置为基于所述控制信息对所述模型进行调节;
第二展示模块,用于接收并展示所述服务器提供的调节后的模型的视频数据。
根据本申请实施例的另一方面,提供一种模型控制装置,所述模型控制装置包括:
模型运行模块,用于运行模型;
第一视频获取模块,用于获取所述模型的视频数据;
第二发送模块,用于向终端发送所述视频数据;
信息接收模块,用于接收所述终端提供的针对所述模型的控制信息;
调节模块,用于基于所述控制信息对所述模型进行调节;
第二视频获取模块,用于获取调节后的模型的视频数据;
第二发送模块,用于向所述终端发送所述调节后的模型的视频数据。
根据本申请实施例的另一方面,提供一种模型控制系统,所述模型控制系统包括:终端以及服务器;
服务器运行模型;
服务器获取所述模型的视频数据;
服务器向终端发送所述视频数据;
终端接收并展示服务器提供的模型的视频数据;
终端获取针对所述视频数据的控制信息;
终端将所述控制信息发送至所述服务器;
服务器接收所述终端提供的针对所述模型的控制信息;
服务器基于所述控制信息对所述模型进行调节;
服务器获取调节后的模型的视频数据;
服务器向所述终端发送所述调节后的模型的视频数据;
终端接收并展示所述服务器提供的调节后的模型的视频数据。
根据本申请实施例的另一方面,提供一种模型控制设备,所述模型控制设备包括处理器和存储器,所述存储器中存储有至少一条指令、至少一段程序、代码集或指令集,所述至少一条指令、所述至少一段程序、所述代码集或指令集由所述处理器加载并执行以实现如上述的模型控制方法。
根据本申请实施例的另一方面,提供一种非瞬态计算机存储介质,所述非瞬态计算机存储介质中存储有至少一条指令、至少一段程序、代码集或指令集,所述至少一条指令、所述至少一段程序、所述代码集或指令集由处理器加载并执行以实现如上述的模型控制方法。
根据本申请实施例的另一方面,提供了一种计算机程序产品或计算机程序,该计算机程序产品或计算机程序包括计算机指令,该计算机指令存储在计算机可读存储介质中。计算机设备的处理器从计算机可读存储介质读取该计算机指令,处理器执行该计算机指令,使得该计算机设备执行上述的模型控制方法。
本申请实施例提供的技术方案带来的有益效果至少包括:
通过服务器向终端提供模型的视频数据,终端将针对视频数据的控制信息发送至服务器,以由服务器来对模型进行调节,并将调节后的模型的视频数据发送至终端,由终端进行展示,如此便实现了服务器来运行模型以及调节模型,终端展示以及控制模型的控制方法,且该方法中模型由服务器运行,不会受制于终端的机能,可以解决相关技术中模型控制方法的控制效率较低的问题,实现了提高模型控制方法的控制效率的效果。
且用户在终端即可以流畅的观察以及控制模型,用户体验较好。
附图说明
为了更清楚地说明本申请实施例中的技术方案,下面将对实施例描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1是本申请实施例所应用的一种应用场景的结构示意图;
图2是本申请实施例提供的一种模型控制方法的方法流程图;
图3是本申请实施例提供的另一种模型控制方法的方法流程图;
图4是本申请实施例提供的另一种模型控制方法的方法流程图;
图5是本申请实施例中一种播放器的架构示意图;
图6是本申请实施例提供的一种获取针对视频数据的控制信息的流程示意图;
图7是本申请实施例中一种终端的显示界面的示意图;
图8是本申请实施例提供的模型控制方法的一种架构示意图;
图9是本申请实施例提供的一种模型控制装置的框图;
图10是本申请实施例提供的另一种模型控制装置的框图;
图11是本申请实施例提供的一种模型控制系统的框图。
通过上述附图,已示出本申请明确的实施例,后文中将有更详细的描述。这些附图和文字描述并不是为了通过任何方式限制本申请构思的范围,而是通过参考特定实施例为本领域技术人员说明本申请的概念。
具体实施方式
为使本申请的目的、技术方案和优点更加清楚,下面将结合附图对本申请实施方式作进一步地详细描述。
一种向用户展示信息的方式是基于要展示的信息建立模型,并向用户展示该模型,且用户可以对该模型进行控制,如选择、缩放视角、平移视角等,用户可以通过该模型较为全面且清晰的获取到相关的信息。
但是,模型的运行对于计算能力的要求较大,若运行模型的设备的计算能力较弱,则会导致模型的运行较为卡顿,大大影响了用户体验。
本申请实施例提供了一种模型控制方法、装置、设备、系统以及计算机存储介质,可以解决上述相关技术中的一些问题。
请参考图1,图1是本申请实施例所应用的一种应用场景的结构示意图,该应用场景可以包括终端11以及服务器12。终端11能够与服务器12建立有线连接或无线连接。
终端11可以包括智能手机、平板电脑、智能可穿戴设备、台式计算机、笔记本型计算机等各种终端。终端11的数量可以是多个,图1示出了终端11的数量为3个的情况,但并不对此进行限制。相较于服务器12,终端11的计算能力通常较弱。终端11中可以运行有应用程序,该应用程序可以由Flutter(一种开源的构建用户界面工具包)技术构建而成。该应用程序用于从服务器12中获取视频数据,并向服务器12发送控制信息(如控制信令等)。
服务器12可以包括一个服务器或者服务器集群,该服务器12可以具有强大的数据处理能力。该服务器12中可以具有能够运行各种模型的组件,如UE模型插件等。此外,该服务器12中还可以具有视频流生成组件,可以生成实时消息传输协议(Real Time Messaging Protocol,RTMP)或者实时流传输协议(Real Time Streaming Protocol,RTSP)格式的视频数据(视频流)。
下面对本申请实施例提供的模型控制方法的应用场景进行说明。
一种应用场景中,本申请实施例提供的模型控制方法,可以用于向用户展示包括多栋建筑物(建筑群)的三维模型,该建筑群可以为住宅楼、商业街区、办公楼群、古建筑群或者城市模型等。用户通过本申请实施例提供的方法,可以方便快捷的了解到这些建筑群的三维模型所要展示的信息,如各个角度的外观,具体细节处的图案等等。
另一种应用场景中,本申请实施例提供的模型控制方法,可以用于向用户展示包括消费品的三维模型,该消费品可以包括手机、平板电脑、智能可穿戴设备、台式计算机、笔记本型计算机、汽车、自行车、摩托车等。用户通过本申请实施例提供的方法,可以方便快捷的了解到这些消费品的三维模型所要展示的信息,如各个角度的外观,具体细节处的图案等等。
图2是本申请实施例提供的一种模型控制方法的方法流程图,该方法可以应用于图1所示实施环境的终端中,该方法可以包括下面几个步骤:
步骤201、接收并展示服务器提供的模型的视频数据。
步骤202、获取针对视频数据的控制信息。
步骤203、将控制信息发送至服务器,服务器被配置为基于控制信息对模型进行调节。
步骤204、接收并展示服务器提供的调节后的模型的视频数据。
综上所述,本申请实施例提供的模型控制方法,通过服务器向终端提供模型的视频数据,终端将针对视频数据的控制信息发送至服务器,以由服务器来对模型进行调节,并将调节后的模型的视频数据发送至终端,由终端进行展示,如此便实现了服务器来运行模型以及调节模型,终端展示以及控制模型的控制方法,且该方法中模型由服务器运行,不会受制于终端的机能,可以解决相关技术中模型控制方法的控制效率较低的问题,实现了提高模型控制方法的控制效率的效果。
图3是本申请实施例提供的另一种模型控制方法的方法流程图,该方法可以应用于图1所示实施环境的服务器中,该方法可以包括下面几个步骤:
步骤301、运行模型。
步骤302、获取模型的视频数据。
步骤303、向终端发送视频数据。
步骤304、接收终端提供的针对模型的控制信息。
步骤305、基于控制信息对模型进行调节。
步骤306、获取调节后的模型的视频数据。
步骤307、向终端发送调节后的模型的视频数据。
综上所述,本申请实施例提供的模型控制方法,通过服务器向终端提供模型的视频数据,终端将针对视频数据的控制信息发送至服务器,以由服务器来对模型进行调节,并将调节后的模型的视频数据发送至终端,由终端进行展示,如此便实现了服务器来运行模型以及调节模型,终端展示以及控制模型的控制方法,且该方法中模型由服务器运行,不会受制于终端的机能,可以解决相关技术中模型控制方法的控制效率较低的问题,实现了提高模型控制方法的控制效率的效果。
图4是本申请实施例提供的另一种模型控制方法的方法流程图,该方法可以应用于图1所示实施环境中,该方法可以包括下面几个步骤:
步骤401、服务器运行模型。
在应用本申请实施例提供的方法时,可以由服务器来运行模型,该模型可以预先设置于服务器中。示例性的,当该模型为三维模型时,服务器可以通过三维模型运行组件(如UE模型插件)运行模型。模型在运行后,可以在控制信息的控制下进行调整,以便于观察。
步骤402、服务器获取模型的视频数据。
服务器可以在运行模型后,开始采集模型的视频数据,该视频数据可以为视频流的形式。在一种示例性的实施例中,服务器中可以具有视频流生成组件,服务器可以通过该视频流生成组件生成实时消息传输协议格式的视频数据或者实时流传输协议格式的视频数据。
步骤403、服务器向终端发送视频数据。
服务器可以通过与终端之间的无线连接或有线连接,向终端发送模型的视频数据。
步骤404、终端接收并展示服务器提供的模型的视频数据。
终端在接收到服务器发送的模型的视频数据后,可以在显示界面展示模型的视频数据。
具体的,在步骤404中,终端所执行的动作可以包括:
1)接收服务器提供的模型的视频数据。
2)通过本地的播放器组件播放模型的视频数据。
终端通过本地的播放器组件播放模型的视频数据,这仅仅是一个视频播放动作,对于终端的计算能力的要求较低,终端可以流畅的播放模型的视频数据。
请参考图5,图5是本申请实施例中一种播放器的架构示意图,其中,终端的播放器可以分为上下两层,下层s2为使用能够播放RTMP/RTSP视频流的播放器,上层s1可以为交互手势捕捉层。
可以在Flutter中使用两个自定义的Widget(可以理解为组件)分别实现上述两层。实现之后,可以在Flutter的Stack组件里面放置上述两层,交互手势捕捉层为上层,播放器为下层,伪代码可以为:
Figure PCTCN2022109021-appb-000001
上述伪代码的含义为:Stack是Flutter自带的层次组件,Stack中可以放置多个子组件,先放入的子组件PlayerWidget处于下层,后放置的子组件CaptureWidget处于上层。
示例性的,终端可以通过Flutter中的视频播放组件(如PlayerWidget)来播放视频数据,并通过Flutter中的触控组件(如CaptureWidget)来获取用户的操作信息。
步骤405、终端获取针对视频数据的控制信息。
该控制信息可以用于对视频数据进行调整。
在一种示例性的实施例中,请参考图6,图6是本申请实施例提供的一种获取针对视频数据的控制信息的流程示意图,步骤405可以包括下面几个子步骤:
子步骤4051、获取操作信息。
该操作信息为用户操作终端而产生的信息,基于控制模型的不同,终端可以通过不同的方式来获取操作信息。在本申请实施例中,终端获取操作信息的方式至少包括通过鼠标获取以及通过触摸屏(或触摸板)获取。
子步骤4052、确定终端当前的控制模式。
控制模式至少包括鼠标控制模式以及触摸控制模式,每种控制模式均包括操作信息与控制信息的对应关系。
子步骤4052至少包括下面几个执行动作:
1)判断操作信息对应的位置是否属于指定区域。
该指定区域可以是指终端的显示界面中的指定区域,该指定区域可以与所显示的模型的视频数据相关,示例性的,该指定区域可以是模型在显示界面所在的区域,或者是显示面板的一个边缘区域,用户可以通过在该指定区域进行操作,以确定控制模式。示例性的,请参考图7,图7是本申请实施例中一种终端的显示界面的示意图。其中,该显示界面71中显示有模型A,该模型A为一个建筑物的三维模型,指定区域可以为该模型A在该显示界面71中所占的区域。终端可以判断用户的操作信息对应的位置是否属于该指定区域。
2)响应于操作信息对应的位置不属于指定区域,确定终端当前的控制模式为触摸控制模式。
当用户的操作信息不是针对指定区域的操作信息时,终端可以确定当前的控制模式为触摸控制模式。
也即是当用户在显示界面中除指定区域外的区域进行触摸控制(可以通过手指或者触控笔进行触摸控制,触摸控制可以包括单击、双击等)后,终端可以确定当前的控制模式为触摸控制模式。
3)响应于操作信息对应的位置属于指定区域,确定终端当前的控制模式为鼠标控制模式。
当用户的操作信息是针对指定区域的操作信息时,终端可以确定当前的控制模式为鼠标控制模式。
也即是当用户通过鼠标在显示界面的指定区域进行鼠标控制(该鼠标控制可以包括单击或者双击左键,或者,单击或者双击右键等)后,终端可以确定当前的控制模式为鼠标控制模式。
在一种示例性的实施例中,终端可以同时通过鼠标以及触摸屏进行控制(例如终端为具有触摸屏的笔记本型计算机),则用户可以通过其中至少一种控制方式来操作终端,或者,可以选择其中一种控制方式来操作终端。
在具体实现时,可以使用Flutter框架自带的MouseRegion类,在带有鼠标的设备上,该类的onEnter会被触发,如果被触发,就表示该设备应该使用鼠标操作模式。默认情况下,按触控操作模式处理。Flutter伪代码和原理示意图可以包括:
Figure PCTCN2022109021-appb-000002
其中,CaptureWidget是手势捕捉组件,它的子组件是MouseRegion,用来检测鼠标,当输入进入MouseRegion区域(该MouseRegion区域可以为上述的指定区域)后,会触发onEnter()函数,判定为支持鼠标;MouseRegion的子组件是一个Container,可以设定Container的大小来指定整个手势捕捉区域的大小。
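上述"默认按触控操作模式处理,onEnter被触发则切换为鼠标操作模式"的状态切换逻辑,可以用下面这段示意性的Python伪码表示(非本申请原文;Flutter中的实际实现依赖MouseRegion组件的onEnter回调,此处的类名与方法名均为示例性假设):

```python
class GestureCaptureArea:
    """手势捕捉区域的控制模式切换示意:默认按触控操作模式处理,
    一旦 on_enter 被触发(模拟MouseRegion的onEnter回调),即判定设备支持鼠标。"""

    def __init__(self):
        self.mode = "touch"  # 默认情况下,按触控操作模式处理

    def on_enter(self):
        # 在带有鼠标的设备上,输入进入手势捕捉区域会触发该回调
        self.mode = "mouse"

area = GestureCaptureArea()
print(area.mode)  # touch
area.on_enter()
print(area.mode)  # mouse
```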
子步骤4053、以终端当前的控制模式包括的对应关系,将操作信息转换为控制信息。
用户通过鼠标以及触控触摸屏进行各种操作,都可以对应有不同的控制信息,且不同的控制模式中,可以有不同的对应关系。
示例性的,一种对应关系可以如表1所示:
表1
Figure PCTCN2022109021-appb-000003
表1中示出了鼠标控制模式(可以为在Windows/Mac/Linux等操作系统中实现的控制模式)的操作信息以及触摸控制模式的操作信息,各自对应的控制信息,示例性的,用户在鼠标控制模式下,单击鼠标左键的操作信息,可以与单击的控制信息对应,该单击的控制信息可以与预定的控制方式对应,如选定模型,高亮选定的模型的边缘等。
在一种示例性的实施例中,控制信息至少包括两种控制指令,示例性的,上述表1中的多个控制信息中,每一个控制信息即可以为一个控制指令。子步骤4053可以包括下面几个执行过程:
1)基于终端当前的控制模式对应的对应关系,判断操作信息是否与至少两种控制指令中的第一控制指令对应。
也即是终端可以通过一种循环判断的方式,依次将操作信息与每一个控制指令比对。示例性的,当终端在鼠标控制模式下接收到的操作信息为"拖动鼠标左键",则可以依据表1记载的对应关系,从上到下,首先判断操作信息"拖动鼠标左键"是否与第一控制指令"单击"对应。可以看出,第一控制指令"单击"对应的操作信息为"单击鼠标左键",则表明操作信息"拖动鼠标左键"不与第一控制指令"单击"对应。
2)响应于操作信息与第一控制指令对应,则将操作信息转换为第一控制指令。
当操作信息与第一控制指令对应,则终端可以将操作信息转换为第一控制指令。示例性的,当终端在鼠标控制模式下接收到的操作信息为"单击鼠标左键",则可以依据表1记载的对应关系,从上到下,首先判断操作信息"单击鼠标左键"是否与第一控制指令"单击"对应。可以看出,第一控制指令"单击"对应的操作信息即为"单击鼠标左键",则表明操作信息"单击鼠标左键"与第一控制指令"单击"对应,则终端可以将操作信息"单击鼠标左键"转换为第一控制指令"单击"。
3)响应于操作信息与第一控制指令不对应,判断操作信息是否与至少两种控制指令中的第二控制指令对应。
类似的,当操作信息与第一控制指令不对应时,终端可以继续判断操作信息是否与至少两种控制指令中的第二控制指令对应,判断方式可以参考上述判断操作信息是否与至少两种控制指令中的第一控制指令对应的过程,本申请实施例在此不再赘述。
4)响应于操作信息与第二控制指令对应,则将操作信息转换为第二控制指令。
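上述依次比对操作信息与各控制指令的循环判断过程,可以用下面这段示意性的Python伪码表示(非本申请原文;对应关系中"单击鼠标左键"对应"单击"取自正文示例,其余条目均为示例性假设):

```python
# 鼠标控制模式下,操作信息与控制指令的对应关系(条目顺序即比对顺序;
# 除第一条取自正文示例外,其余条目为示例性假设)
MOUSE_MODE_MAPPING = [
    ("单击鼠标左键", "单击"),
    ("双击鼠标左键", "双击"),
    ("拖动鼠标左键", "视角改变"),
]

def convert_operation(operation, mapping):
    """依次将操作信息与每一个控制指令比对,比对成功则转换为对应的控制指令;
    若全部比对失败,则返回 None,表示操作信息无匹配。"""
    for op, instruction in mapping:
        if operation == op:
            return instruction
    return None

print(convert_operation("单击鼠标左键", MOUSE_MODE_MAPPING))  # 单击
print(convert_operation("拖动鼠标左键", MOUSE_MODE_MAPPING))  # 视角改变
```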
若控制信息包括n种控制指令,则终端可以依次将操作信息与这n种控制指令进行比对,直到比对成功为止。当然,若这n种控制指令均不与操作信息对应,则终端可以在操作界面展示提示信息,以提示用户操作信息无匹配,或者,终端可以不进行反应。下面提供一种上述表1提供的操作信息的具体采集方式,请参考表2:
表2
Figure PCTCN2022109021-appb-000004
Figure PCTCN2022109021-appb-000005
上述表中的Listener、onPointerDown、onPointerMove、scrollDelta可以均为应用于鼠标控制中的程序,GestureDector、onTap、onDoubleTap、onScaleUpdate、onLongPressStart以及onLongPressEnd可以均为应用于触摸控制的程序,具体可以参考相关技术,本申请实施例对此不进行赘述。
此外,本申请实施例还提供一种JSON格式的信令协议,请参考表3:
表3
Figure PCTCN2022109021-appb-000006
Figure PCTCN2022109021-appb-000007
Figure PCTCN2022109021-appb-000008
该信令协议用于定义终端和服务器之间传输的控制信息的格式。Flutter作为一种跨平台技术,其本身提供了一些基本的手势捕捉组件,比如GestureDector,GestureDector可检测到单击、双击等,但不够灵活,可能难以适用于对三维模型的具体控制(如变换视角以及视角的缩放等),且无法灵活的应用于不同的控制模式中。而本申请实施例中,通过重新定义各种控制信息,以便于应用于对三维模型的控制场景中,提高了本申请实施例提供的模型的控制方法的适用性。
步骤406、终端将控制信息发送至服务器。
终端在获取了用户触发的操作信息对应的控制信息后,可以通过与服务器之间的无线连接或有线连接,将控制信息发送至服务器,以便于服务器基于该控制信息来控制运行于服务器中的模型。示例性的,终端可以将控制信息以JSON格式进行数据封装,并通过网络协议(比如TCP、UDP、WebSocket、MQTT、HTTP等)传输给服务器。
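终端将控制信息以JSON格式封装的过程,可以用下面这段示意性的Python伪码表示(非本申请原文;表3定义的实际信令协议字段未在正文中展开,下面的字段名type与position均为示例性假设):

```python
import json

def build_control_message(instruction, x=None, y=None):
    """将控制指令封装为JSON格式的信令字符串。
    字段名(type/position)为示例性假设,并非表3定义的实际协议字段。"""
    message = {"type": instruction}
    if x is not None and y is not None:
        # 部分控制指令(如单击)可以附带操作位置
        message["position"] = {"x": x, "y": y}
    return json.dumps(message, ensure_ascii=False)

signaling = build_control_message("单击", x=300, y=250)
print(signaling)
```

封装后的字符串即可通过TCP、UDP、WebSocket、MQTT或HTTP等协议传输给服务器。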
在一种示例性的实施例中,本申请实施例中所涉及的终端可以包括多个终端,这多个终端均可以通过步骤401至步骤406,将控制信息发送至服务器,一种可选地实施方式中,这多个终端可以各自对不同的模型进行控制,另一种可选地实施方式中,这多个终端可以对同一个模型进行控制,本申请实施例对此不进行限制。
服务器可以接收终端提供的针对服务器中所运行的模型的控制信息。在一种示例性的实施例中,该终端至少包括第一终端以及第二终端,则服务器接收到的即为第一终端提供的针对模型的第一控制信息以及第二终端提供的针对模型的第二控制信息。服务器可以先后接收到第一控制信息以及第二控制信息,或者同时接收到第一控制信息以及第二控制信息,本申请实施例对此不进行限制。
本申请实施例中,终端中可以运行有应用程序(客户端),该应用程序采用能够跨平台的Flutter技术实现,可应用于Windows端、Android端、iOS端、PC浏览器端等各个平台中。示例性的,该应用了Flutter技术的应用程序可以展示上述图7所示的显示界面,用户可以在该应用程序的显示界面中输入对于模型的控制信息。
步骤407、服务器基于控制信息对模型进行调节。
服务器可以基于终端提供的控制信息对模型进行调节。
在服务器接收到第一终端提供的针对模型的第一控制信息以及第二终端提供的针对模型的第二控制信息后,服务器可以基于第一终端提供的第一控制信息,以及第二终端提供的第二控制信息,对模型进行调节。
在一种示例性的实施例中,服务器可以基于接收到的控制信息的先后顺序,依次对模型进行调节,例如,服务器先接收到第一终端发送的第一控制信息,则服务器可以先基于第一控制信息对模型进行调节,之后接收到了第二终端发送的第二控制信息,则服务器继续基于第二控制信息对模型进行调节。
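服务器按控制信息到达的先后顺序依次对模型进行调节的处理方式,可以用一个先进先出队列来示意(非本申请原文,Python伪码,其中的类名与方法名均为示例性假设):

```python
from collections import deque

class ModelController:
    """按控制信息到达的先后顺序,依次对模型进行调节的示意实现。"""

    def __init__(self):
        self.pending = deque()  # 待处理的控制信息队列(先进先出)
        self.applied = []       # 已作用于模型的控制指令,用列表模拟模型状态的变化

    def receive(self, terminal_id, control_info):
        # 控制信息到达时先入队,保证按到达的先后顺序处理
        self.pending.append((terminal_id, control_info))

    def process_all(self):
        while self.pending:
            terminal_id, control_info = self.pending.popleft()
            self.applied.append((terminal_id, control_info))

controller = ModelController()
controller.receive("terminal-1", "视角平移")  # 第一终端发送的第一控制信息
controller.receive("terminal-2", "视角缩放")  # 第二终端发送的第二控制信息
controller.process_all()
print(controller.applied)  # [('terminal-1', '视角平移'), ('terminal-2', '视角缩放')]
```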
在另一种示例性的实施例中,由多个终端来共同对服务器中的模型进行控制的场景下,服务器可以通过步骤407至步骤409来分别对模型进行调整。
步骤408、服务器获取调节后的模型的视频数据。
类似于步骤402,服务器可以获取调节后的模型的视频数据。
步骤409、服务器向终端发送调节后的模型的视频数据。
服务器可以通过与终端之间的无线连接或有线连接,向终端发送模型的视频数据。
由多个终端来共同对服务器中的模型进行控制的场景下,服务器可以向多个终端发送调节后的模型的视频数据。
步骤410、终端接收并展示服务器提供的调节后的模型的视频数据。
终端可以在接收到服务器提供的调节后的模型的视频数据后,在显示界面展示该视频数据。如此便实现了在终端侧进行模型的展示以及模型的调节的功能,且终端无需直接运行以及调整模型,大大降低了对于终端的机能要求,便于在终端进行复杂模型的展示和控制,提高了用户体验。
若服务器向多个终端发送调节后的模型的视频数据,则多个终端均可以看到同样的一个调节后的模型的视频数据,如此便可以实现向多个终端同时展示模型以及控制模型的功能。
综上所述,本申请实施例提供的模型控制方法,通过服务器向终端提供模型的视频数据,终端将针对视频数据的控制信息发送至服务器,以由服务器来对模型进行调节,并将调节后的模型的视频数据发送至终端,由终端进行展示,如此便实现了服务器来运行模型以及调节模型,终端展示以及控制模型的控制方法,且该方法中模型由服务器运行,不会受制于终端的机能,可以解决相关技术中模型控制方法的控制效率较低的问题,实现了提高模型控制方法的控制效率的效果。
请参考图8,图8是本申请实施例提供的模型控制方法的一种架构示意图,其中,服务器中包括UE模型插件、UE模型转视频流模块、推流服务器模块以及信令接收器;终端中可以包括播放器中的手势捕捉模块、流媒体播放器模块、信令协议转换模块以及信令发送模块。
在应用本申请实施例提供的模型控制方法时,服务器中的UE模型插件可以运行模型,并由UE模型转视频流模块采集得到视频流,再由推流服务模块将视频流推送至终端,终端通过流媒体播放器播放视频流,并通过手势捕捉模块获取用户的控制信息,之后通过信令协议转换模块将控制信息转换为预定格式的信令,并由信令发送器发送至服务器,服务器中的信令接收器接收信令,并将信令发送至UE模型插件以调整模型。
请参考图9,图9是本申请实施例提供的一种模型控制装置的框图,该模型控制装置可以部分或者全部结合设置于图1所示的实施环境中的终端中,该模型控制装置900包括:
第一展示模块910,用于接收并展示服务器提供的模型的视频数据;
控制信息获取模块920,用于获取针对视频数据的控制信息;
第一发送模块930,用于将控制信息发送至服务器,服务器被配置为基于控制信息对模型进行调节;
第二展示模块940,用于接收并展示服务器提供的调节后的模型的视频数据。
综上所述,本申请实施例提供的模型控制装置,通过服务器向终端提供模型的视频数据,终端将针对视频数据的控制信息发送至服务器,以由服务器来对模型进行调节,并将调节后的模型的视频数据发送至终端,由终端进行展示,如此便实现了服务器来运行模型以及调节模型,终端展示以及控制模型的控制方法,且该方法中模型由服务器运行,不会受制于终端的机能,可以解决相关技术中模型控制方法的控制效率较低的问题,实现了提高模型控制方法的控制效率的效果。
下述为本公开装置实施例,可以用于执行本公开方法实施例。对于本公开装置实施例中未披露的细节,请参照本公开方法实施例。
请参考图10,图10是本申请实施例提供的另一种模型控制装置的框图,该模型控制装置可以部分或者全部结合设置于图1所示的实施环境中的服务器中,该模型控制装置1000包括:
模型运行模块1010,用于运行模型;
第一视频获取模块1020,用于获取模型的视频数据;
第二发送模块1030,用于向终端发送视频数据;
信息接收模块1040,用于接收终端提供的针对模型的控制信息;
调节模块1050,用于基于控制信息对模型进行调节;
第二视频获取模块1060,用于获取调节后的模型的视频数据;
第二发送模块1070,用于向终端发送调节后的模型的视频数据。
综上所述,本申请实施例提供的模型控制装置,通过服务器向终端提供模型的视频数据,终端将针对视频数据的控制信息发送至服务器,以由服务器来对模型进行调节,并将调节后的模型的视频数据发送至终端,由终端进行展示,如此便实现了服务器来运行模型以及调节模型,终端展示以及控制模型的控制方法,且该方法中模型由服务器运行,不会受制于终端的机能,可以解决相关技术中模型控制方法的控制效率较低的问题,实现了提高模型控制方法的控制效率的效果。
图11是本申请实施例提供的一种模型控制系统的框图,该模型控制系统包 括:终端1110以及服务器1120。
服务器1120运行模型。
服务器1120获取所述模型的视频数据。
服务器1120向终端1110发送所述视频数据。
终端1110接收并展示服务器1120提供的模型的视频数据。
终端1110获取针对所述视频数据的控制信息。
终端1110将所述控制信息发送至所述服务器1120。
服务器1120接收所述终端1110提供的针对所述模型的控制信息。
服务器1120基于所述控制信息对所述模型进行调节。
服务器1120获取调节后的模型的视频数据。
服务器1120向所述终端1110发送所述调节后的模型的视频数据。
终端1110接收并展示所述服务器1120提供的调节后的模型的视频数据。
综上所述,本申请实施例提供的模型控制系统,通过服务器向终端提供模型的视频数据,终端将针对视频数据的控制信息发送至服务器,以由服务器来对模型进行调节,并将调节后的模型的视频数据发送至终端,由终端进行展示,如此便实现了服务器来运行模型以及调节模型,终端展示以及控制模型的控制方法,且该方法中模型由服务器运行,不会受制于终端的机能,可以解决相关技术中模型控制方法的控制效率较低的问题,实现了提高模型控制方法的控制效率的效果。
此外,本申请实施例还提供一种模型控制设备,模型控制设备包括处理器和存储器,存储器中存储有至少一条指令、至少一段程序、代码集或指令集,至少一条指令、至少一段程序、代码集或指令集由处理器加载并执行以实现如上述的模型控制方法。
本申请实施例还提供一种非瞬态计算机存储介质,该非瞬态计算机存储介质中存储有至少一条指令、至少一段程序、代码集或指令集,至少一条指令、至少一段程序、代码集或指令集由处理器加载并执行以实现如上述的模型控制方法。
本申请实施例还提供了一种计算机程序产品或计算机程序,该计算机程序产品或计算机程序包括计算机指令,该计算机指令存储在计算机可读存储介质中。计算机设备的处理器从计算机可读存储介质读取该计算机指令,处理器执行该计算机指令,使得该计算机设备执行上述的模型控制方法。
本申请中术语“A和B的至少一种”,仅仅是一种描述关联对象的关联关系,表示可以存在三种关系,例如,A和B的至少一种,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。同理,“A、B和C的至少一种”表示可以存在七种关系,可以表示:单独存在A,单独存在B,单独存在C,同时存在A和B,同时存在A和C,同时存在C和B,同时存在A、B和C这七种情况。同理,“A、B、C和D的至少一种”表示可以存在十五种关系,可以表示:单独存在A,单独存在B,单独存在C,单独存在D,同时存在A和B,同时存在A和C,同时存在A和D,同时存在C和B,同时存在D和B,同时存在C和D,同时存在A、B和C,同时存在A、B和D,同时存在A、C和D,同时存在B、C和D,同时存在A、B、C和D,这十五种情况。
在本申请中,术语“第一”、“第二”、“第三”和“第四”仅用于描述目的,而不能理解为指示或暗示相对重要性。术语“多个”指两个或两个以上,除非另有明确的限定。
在本申请所提供的几个实施例中,应该理解到,所揭露的装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
本领域普通技术人员可以理解实现上述实施例的全部或部分步骤可以通过硬件来完成,也可以通过程序来指令相关的硬件完成,所述的程序可以存储于一种计算机可读存储介质中,上述提到的存储介质可以是只读存储器,磁盘或光盘等。
以上所述仅为本申请的可选实施例,并不用以限制本申请,凡在本申请的精神和原则之内,所作的任何修改、等同替换、改进等,均应包含在本申请的保护范围之内。

Claims (15)

  1. 一种模型控制方法,其特征在于,应用于终端中,所述方法包括:
    接收并展示服务器提供的模型的视频数据;
    获取针对所述视频数据的控制信息;
    将所述控制信息发送至所述服务器,所述服务器被配置为基于所述控制信息对所述模型进行调节;
    接收并展示所述服务器提供的调节后的模型的视频数据。
  2. 根据权利要求1所述的方法,其特征在于,所述获取针对所述视频数据的控制信息,包括:
    获取操作信息;
    确定所述终端当前的控制模式,所述控制模式至少包括鼠标控制模式以及触摸控制模式,每种所述控制模式均包括操作信息与控制信息的对应关系;
    以所述终端当前的控制模式包括的对应关系,将所述操作信息转换为所述控制信息。
  3. 根据权利要求2所述的方法,其特征在于,所述控制信息至少包括两种控制指令,
    所述以所述终端当前的控制模式包括的对应关系,将所述操作信息转换为所述控制信息,包括:
    基于所述终端当前的控制模式对应的对应关系,判断所述操作信息是否与所述至少两种控制指令中的第一控制指令对应;
    响应于所述操作信息与所述第一控制指令对应,则将所述操作信息转换为所述第一控制指令;
    响应于所述操作信息与所述第一控制指令不对应,判断所述操作信息是否与所述至少两种控制指令中的第二控制指令对应;
    响应于所述操作信息与所述第二控制指令对应,则将所述操作信息转换为所述第二控制指令。
  4. 根据权利要求3所述的方法,其特征在于,所述控制信息至少包括单击、双击、视角平移、视角缩放以及视角改变中的至少两种控制指令。
  5. 根据权利要求2所述的方法,其特征在于,所述确定所述终端当前的控制模式,包括:
    判断所述操作信息对应的位置是否属于指定区域;
    响应于所述操作信息对应的位置不属于所述指定区域,确定所述终端当前的控制模式为所述触摸控制模式;
    响应于所述操作信息对应的位置属于所述指定区域,确定所述终端当前的控制模式为所述鼠标控制模式。
  6. 根据权利要求1-5任一所述的方法,其特征在于,所述接收并展示服务器提供的模型的视频数据,包括:
    接收所述服务器提供的所述模型的视频数据;
    通过本地的播放器组件播放所述模型的视频数据。
  7. 一种模型控制方法,其特征在于,应用于服务器中,所述方法包括:
    运行模型;
    获取所述模型的视频数据;
    向终端发送所述视频数据;
    接收所述终端提供的针对所述模型的控制信息;
    基于所述控制信息对所述模型进行调节;
    获取调节后的模型的视频数据;
    向所述终端发送所述调节后的模型的视频数据。
  8. 根据权利要求7所述的方法,其特征在于,所述模型为三维模型,所述运行模型,包括:
    通过三维模型运行组件运行所述模型。
  9. 根据权利要求7所述的方法,其特征在于,所述终端至少包括第一终端以及第二终端,
    所述基于所述控制信息对所述模型进行调节,包括:
    基于所述第一终端提供的第一控制信息,以及所述第二终端提供的第二控制信息,对所述模型进行调节。
  10. 根据权利要求9所述的方法,其特征在于,所述第一终端和所述第二终端为不同操作系统的终端。
  11. 一种模型控制装置,其特征在于,所述模型控制装置包括:
    第一展示模块,用于接收并展示服务器提供的模型的视频数据;
    控制信息获取模块,用于获取针对所述视频数据的控制信息;
    第一发送模块,用于将所述控制信息发送至所述服务器,所述服务器被配置为基于所述控制信息对所述模型进行调节;
    第二展示模块,用于接收并展示所述服务器提供的调节后的模型的视频数据。
  12. 一种模型控制装置,其特征在于,所述模型控制装置包括:
    模型运行模块,用于运行模型;
    第一视频获取模块,用于获取所述模型的视频数据;
    第二发送模块,用于向终端发送所述视频数据;
    信息接收模块,用于接收所述终端提供的针对所述模型的控制信息;
    调节模块,用于基于所述控制信息对所述模型进行调节;
    第二视频获取模块,用于获取调节后的模型的视频数据;
    第二发送模块,用于向所述终端发送所述调节后的模型的视频数据。
  13. 一种模型控制系统,其特征在于,所述模型控制系统包括:终端以及服务器;
    服务器运行模型;
    服务器获取所述模型的视频数据;
    服务器向终端发送所述视频数据;
    终端接收并展示服务器提供的模型的视频数据;
    终端获取针对所述视频数据的控制信息;
    终端将所述控制信息发送至所述服务器;
    服务器接收所述终端提供的针对所述模型的控制信息;
    服务器基于所述控制信息对所述模型进行调节;
    服务器获取调节后的模型的视频数据;
    服务器向所述终端发送所述调节后的模型的视频数据;
    终端接收并展示所述服务器提供的调节后的模型的视频数据。
  14. 一种模型控制设备,其特征在于,所述模型控制设备包括处理器和存储器,所述存储器中存储有至少一条指令、至少一段程序、代码集或指令集,所述至少一条指令、所述至少一段程序、所述代码集或指令集由所述处理器加载并执行以实现如权利要求1至6任一所述的模型控制方法,或者,如权利要求7至10任一所述的模型控制方法。
  15. 一种非瞬态计算机存储介质,其特征在于,所述非瞬态计算机存储介质中存储有至少一条指令、至少一段程序、代码集或指令集,所述至少一条指令、所述至少一段程序、所述代码集或指令集由处理器加载并执行以实现如权利要求1至6任一所述的模型控制方法,或者,如权利要求7至10任一所述的模型控制方法。
PCT/CN2022/109021 2022-07-29 2022-07-29 模型控制方法、装置、设备、系统以及计算机存储介质 WO2024021036A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2022/109021 WO2024021036A1 (zh) 2022-07-29 2022-07-29 模型控制方法、装置、设备、系统以及计算机存储介质
CN202280002449.4A CN117813579A (zh) 2022-07-29 2022-07-29 模型控制方法、装置、设备、系统以及计算机存储介质

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2022/109021 WO2024021036A1 (zh) 2022-07-29 2022-07-29 模型控制方法、装置、设备、系统以及计算机存储介质

Publications (1)

Publication Number Publication Date
WO2024021036A1 true WO2024021036A1 (zh) 2024-02-01

Family

ID=89705043

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/109021 WO2024021036A1 (zh) 2022-07-29 2022-07-29 模型控制方法、装置、设备、系统以及计算机存储介质

Country Status (2)

Country Link
CN (1) CN117813579A (zh)
WO (1) WO2024021036A1 (zh)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103400543A (zh) * 2013-07-18 2013-11-20 贵州宝森科技有限公司 3d互动展示系统及其展示方法
US20140368499A1 (en) * 2013-06-15 2014-12-18 Rajdeep Kaur Virtual Fitting Room
JP3212833U (ja) * 2017-04-12 2017-10-05 麦奇教育集団有限公司Tutor Group Limited 対話型教育支援システム
US20200259931A1 (en) * 2017-12-29 2020-08-13 Tencent Technology (Shenzhen) Company Limited Multimedia information sharing method, related apparatus, and system
US20200302693A1 (en) * 2019-03-19 2020-09-24 Obsess, Inc. Generating and presenting a 3d virtual shopping environment
WO2021169431A1 (zh) * 2020-02-27 2021-09-02 北京市商汤科技开发有限公司 交互方法、装置、电子设备以及存储介质


Also Published As

Publication number Publication date
CN117813579A (zh) 2024-04-02

Similar Documents

Publication Publication Date Title
CN111741372B (zh) 一种视频通话的投屏方法、显示设备及终端设备
EP2930937A1 (en) Method, apparatus, and system for transferring digital media content playback
US9569159B2 (en) Apparatus, systems and methods for presenting displayed image information of a mobile media device on a large display and control of the mobile media device therefrom
WO2019114185A1 (zh) 一种app远程控制方法及相关设备
WO2021109745A1 (zh) 一种远程协助的方法、电子设备和系统
US10684696B2 (en) Mechanism to enhance user experience of mobile devices through complex inputs from external displays
JP2024505995A (ja) 特殊効果展示方法、装置、機器および媒体
JP2023503679A (ja) マルチウィンドウ表示方法、電子デバイス及びシステム
US20240089529A1 (en) Content collaboration method and electronic device
CN107870754A (zh) 一种控制设备上展示的内容的方法及装置
WO2013123720A1 (zh) 一种控制鼠标模块的方法及电子设备
US20230333803A1 (en) Enhanced Screen Sharing Method and System, and Electronic Device
WO2023011058A1 (zh) 显示设备、通信终端及投屏画面动态显示方法
WO2023155529A1 (zh) 显示设备、智能家居系统及用于显示设备的多屏控制方法
CN114286152A (zh) 显示设备、通信终端及投屏画面动态显示方法
WO2020248697A1 (zh) 显示设备及视频通讯数据处理方法
US20230370686A1 (en) Information display method and apparatus, and device and medium
Ha et al. N-screen service using I/O virtualization technology
WO2024051540A1 (zh) 特效处理方法、装置、电子设备及存储介质
TW201508605A (zh) 顯示網路資訊介面的終端、系統及介面的生成方法
WO2024021036A1 (zh) 模型控制方法、装置、设备、系统以及计算机存储介质
CN108900794B (zh) 用于远程会议的方法和装置
WO2021012128A1 (zh) 基于移动终端的图像显示装置、方法、介质和电子设备
CN115250357B (zh) 终端设备、视频处理方法和电子设备
CN111356009B (zh) 音频数据的处理方法、装置、存储介质以及终端

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22952483

Country of ref document: EP

Kind code of ref document: A1