CN114945088A - Three-dimensional model generation method and device, shooting terminal and terminal equipment - Google Patents


Info

Publication number
CN114945088A
Authority
CN
China
Legal status (assumed; not a legal conclusion)
Pending
Application number
CN202210517010.7A
Other languages
Chinese (zh)
Inventor
苏安东 (Su Andong)
Current Assignee
Beijing Youzhuju Network Technology Co Ltd
Original Assignee
Beijing Youzhuju Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Youzhuju Network Technology Co Ltd
Priority to CN202210517010.7A
Publication of CN114945088A

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
        • H04N 13/10: Processing, recording or transmission of stereoscopic or multi-view image signals
            • H04N 13/106: Processing image signals
        • H04N 13/20: Image signal generators
            • H04N 13/275: Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals


Abstract

An embodiment of the present application discloses a three-dimensional model generation method and apparatus, a shooting terminal, and a terminal device. One embodiment of the method comprises: starting to collect movement information in response to detecting a position movement of the shooting terminal; in response to detecting a shooting operation, ending the collection of movement information and determining a movement track corresponding to the current shooting operation by using the collected movement information; and sending target information corresponding to the current shooting operation to a target terminal device, so that the target terminal device generates a three-dimensional model of the shooting scene based on the target information respectively corresponding to at least one shooting operation, where the target information comprises the image acquired by the shooting operation and the movement track corresponding to the shooting operation. This embodiment reduces the complexity of the user's operation and improves the accuracy of the generated three-dimensional model of the shooting scene.

Description

Three-dimensional model generation method and device, shooting terminal and terminal equipment
Technical Field
The embodiment of the disclosure relates to the technical field of computers, in particular to a three-dimensional model generation method and device, a shooting terminal and terminal equipment.
Background
Existing shooting terminals use a depth camera to collect spatial depth data and rely on algorithms to generate point clouds and reconstruct three-dimensional models. When capturing single-point or multi-point data, the user has to manually rotate the model, confirm the position and the orientation of the room, and finally splice the captured images together. In this process, both the rotation and the movement operations increase the complexity and difficulty of operation.
Disclosure of Invention
This Summary is provided to introduce concepts in a simplified form that are further described below in the Detailed Description. It is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
The embodiment of the disclosure provides a three-dimensional model generation method and device, a shooting terminal and a terminal device, which reduce the complexity of user operation and improve the accuracy of a generated three-dimensional model of a shooting scene.
In a first aspect, an embodiment of the present disclosure provides a three-dimensional model generation method, which is applied to a shooting terminal, and the method includes: starting to collect movement information in response to the detection of the position movement of the shooting terminal; in response to the detection of the shooting operation, ending the collection of the movement information, and determining a movement track corresponding to the current shooting operation by using the collected movement information; and sending target information corresponding to the current shooting operation to target terminal equipment so that the target terminal equipment can generate a three-dimensional model of a shooting scene based on the target information respectively corresponding to at least one shooting operation, wherein the target information comprises an image acquired by the shooting operation and a moving track corresponding to the shooting operation.
In a second aspect, an embodiment of the present disclosure provides a three-dimensional model generation method, applied to a terminal device, where the method includes: receiving target information respectively corresponding to at least one shooting operation sent by a shooting terminal, wherein the target information comprises an image acquired by the shooting operation and a moving track corresponding to the shooting operation; determining position information of a shooting position corresponding to at least one shooting operation by using a moving track corresponding to the at least one shooting operation; and generating a three-dimensional model of the shooting scene based on the images acquired by the at least one shooting operation and the corresponding position information.
In a third aspect, an embodiment of the present disclosure provides a three-dimensional model generating device, which is disposed in a shooting terminal, and includes: the acquisition unit is used for responding to the detection of the position movement of the shooting terminal and starting to acquire movement information; the determining unit is used for responding to the detected shooting operation, finishing the collection of the movement information and determining a movement track corresponding to the current shooting operation by using the collected movement information; the sending unit is used for sending target information corresponding to the current shooting operation to the target terminal equipment so that the target terminal equipment can generate a three-dimensional model of a shooting scene based on the target information corresponding to at least one shooting operation respectively, wherein the target information comprises an image collected by the shooting operation and a moving track corresponding to the shooting operation.
In a fourth aspect, an embodiment of the present disclosure provides a three-dimensional model generating apparatus, which is disposed in a terminal device, and includes: the device comprises a receiving unit, a processing unit and a processing unit, wherein the receiving unit is used for receiving target information respectively corresponding to at least one shooting operation sent by a shooting terminal, and the target information comprises an image acquired by the shooting operation and a moving track corresponding to the shooting operation; the determining unit is used for determining the position information of the shooting position corresponding to the at least one shooting operation by using the moving track corresponding to the at least one shooting operation; and the generating unit is used for generating a three-dimensional model of the shooting scene based on the image acquired by at least one shooting operation and the corresponding position information.
In a fifth aspect, an embodiment of the present disclosure provides a shooting device, including: one or more processors; and a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the three-dimensional model generation method according to the first aspect.
In a sixth aspect, an embodiment of the present disclosure provides a terminal device, including: one or more processors; a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the three-dimensional model generation method according to the second aspect.
In a seventh aspect, the disclosed embodiments provide a computer readable medium, on which a computer program is stored, which when executed by a processor, implements the steps of the three-dimensional model generation method according to the first or second aspect.
According to the three-dimensional model generation method and apparatus, the shooting terminal, and the terminal device of the embodiments, movement information starts to be collected in response to detecting a position movement of the shooting terminal; in response to detecting a shooting operation, the collection of movement information ends, and a movement track corresponding to the current shooting operation is determined by using the collected movement information; then, target information corresponding to the current shooting operation is sent to a target terminal device, so that the target terminal device generates a three-dimensional model of the shooting scene based on the target information respectively corresponding to at least one shooting operation, where the target information comprises the image acquired by the shooting operation and the movement track corresponding to the shooting operation. In this way, the position information of the shooting position corresponding to each shooting operation is determined, and the captured images can be associated with their shooting positions, so that the captured images are spliced into a three-dimensional model of the shooting scene; this reduces the complexity of user operation and improves the accuracy of the generated three-dimensional model.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale.
FIG. 1 is an exemplary system architecture diagram in which various embodiments of the present disclosure may be applied;
FIG. 2 is a flow diagram for one embodiment of a three-dimensional model generation method according to the present disclosure;
FIG. 3 is a flow diagram of yet another embodiment of a three-dimensional model generation method according to the present disclosure;
FIG. 4 is a schematic structural diagram of one embodiment of a three-dimensional model generation apparatus according to the present disclosure;
FIG. 5 is a schematic structural diagram of yet another embodiment of a three-dimensional model generation apparatus according to the present disclosure;
FIG. 6 is a schematic block diagram of a computer system suitable for use with an electronic device implementing an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It should be noted that the modifiers "a" and "an" in this disclosure are illustrative rather than restrictive, and those skilled in the art should understand them as "one or more" unless the context clearly indicates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
FIG. 1 illustrates an exemplary system architecture 100 to which embodiments of the three-dimensional model generation methods of the present disclosure may be applied.
As shown in fig. 1, the system architecture 100 may include photographing devices 1011, 1012, a network 102, and terminal devices 1031, 1032. The network 102 serves as a medium for providing a communication link between the photographing devices 1011, 1012 and the terminal devices 1031, 1032. Network 102 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The photographing devices 1011, 1012 may begin collecting movement information in response to detecting a position movement; in response to detecting a shooting operation, they may end collecting the movement information and determine a movement track corresponding to the current shooting operation by using the collected movement information; after that, the target information corresponding to the current shooting operation may be sent to the terminal devices 1031, 1032, so that the terminal devices 1031, 1032 generate a three-dimensional model of the shooting scene based on the target information respectively corresponding to at least one shooting operation.
The photographing devices 1011 and 1012 may be hardware or software. When they are hardware, they may be various electronic devices that have a camera and an inertial measurement unit and support information interaction, including but not limited to a camera (e.g., a VR camera), a smartphone, a tablet computer, and the like. When they are software, they may be installed in the electronic devices listed above, and may be implemented as multiple pieces of software or software modules (e.g., to provide distributed services) or as a single piece of software or software module. No specific limitation is made here.
The terminal devices 1031, 1032 may receive the images and movement tracks respectively corresponding to at least one shooting operation sent by the photographing devices 1011, 1012; then, they may determine the position information of the shooting position corresponding to the at least one shooting operation by using the corresponding movement tracks; finally, they may generate a three-dimensional model of the shooting scene based on the images acquired by the at least one shooting operation and the corresponding position information.
Various communication client applications may be installed on the terminal devices 1031, 1032, such as a three-dimensional model building application, a house-hunting application, instant messaging software, and the like.
The terminal devices 1031, 1032 may be hardware or software. When the terminal devices 1031, 1032 are hardware, they may be various electronic devices having display screens and supporting information interaction, including but not limited to smart phones, tablet computers, laptop computers, and the like. When the terminal devices 1031, 1032 are software, they may be installed in the electronic devices listed above. It may be implemented as multiple pieces of software or software modules (e.g., multiple pieces of software or software modules to provide distributed services) or as a single piece of software or software module. And is not particularly limited herein.
It should be further noted that the three-dimensional model generation method provided by the embodiment of the present disclosure may be executed by the photographing devices 1011 and 1012, and then the three-dimensional model generation apparatus may be disposed in the photographing devices 1011 and 1012. The three-dimensional model generation method provided by the embodiment of the present disclosure may also be executed by the terminal devices 1031, 1032, and the three-dimensional model generation apparatus may also be disposed in the terminal devices 1031, 1032.
It should be noted that, if the capturing devices 1011 and 1012 are cameras, in this case, the system architecture 100 may further include a server (not shown in the figure), and the capturing devices 1011 and 1012 need to perform information interaction with the terminal devices 1031 and 1032 via the server.
It should be understood that the number of photographing devices, networks, and terminal devices in fig. 1 is merely illustrative. There may be any number of photographing devices, networks, and terminal devices, as desired for implementation.
With continued reference to FIG. 2, a flow 200 of one embodiment of a three-dimensional model generation method according to the present disclosure is shown. The three-dimensional model generation method can be applied to a shooting terminal, and the flow 200 of the three-dimensional model generation method comprises the following steps:
step 201, in response to the detection of the position movement of the shooting terminal, starting to collect movement information.
In the present embodiment, the execution body of the three-dimensional model generation method (e.g., the shooting terminal shown in FIG. 1) may determine whether a position movement of the shooting device (i.e., the execution body itself) is detected. When constructing a three-dimensional model of a scene, an image shot at a single shooting position can only reconstruct a limited range; if the scene to be shot is large, images must be shot at multiple shooting positions to reconstruct the whole scene. The user therefore needs to move from one shooting position to another to capture these images.
If the position movement of the shooting terminal is detected, the execution body may start to collect movement information. The movement information may include, but is not limited to: a movement direction, a movement speed, and a movement distance.
Step 202, in response to the detection of the shooting operation, ending the collection of the movement information, and determining a movement track corresponding to the current shooting operation by using the collected movement information.
In the present embodiment, the execution body may determine whether a shooting operation is detected. When shooting a scene, the user may plan approximate positions for the shooting operations based on experience; upon reaching a planned shooting position, the user may stand at that position and turn through a full circle to shoot.
If the shooting operation is detected, the execution body may end collecting the movement information. Then, the execution body may determine the movement track corresponding to the current shooting operation by using the collected movement information (e.g., movement direction, movement speed, and movement distance). The movement information collected in this period, that is, while the shooting terminal moves from one shooting position to the next, can be used to determine the movement track of the shooting terminal between those two shooting positions.
It should be noted that, predicting the movement track by using the movement information such as the movement direction, the movement speed, the movement distance, and the like is a conventional technical means in the art, and is not described herein again.
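As an illustration of this conventional technique, the following sketch dead-reckons a movement track from per-sample heading and speed. The `MovementSample` type, the axis convention (x eastward, y northward), and the flat-ground assumption are all hypothetical details for illustration, not part of the disclosure.

```python
import math
from dataclasses import dataclass

# Hypothetical sample type: one reading with heading (radians clockwise
# from due north), speed (m/s), and time elapsed since the previous sample.
@dataclass
class MovementSample:
    heading_rad: float
    speed_mps: float
    dt_s: float

def dead_reckon(samples, start=(0.0, 0.0)):
    """Integrate movement samples into a trajectory of (x, y) points.

    x grows eastward and y northward; the last point of the returned
    trajectory is the estimated position at which the next shooting
    operation occurs.
    """
    x, y = start
    trajectory = [(x, y)]
    for s in samples:
        step = s.speed_mps * s.dt_s          # distance covered this sample
        x += step * math.sin(s.heading_rad)  # east component
        y += step * math.cos(s.heading_rad)  # north component
        trajectory.append((x, y))
    return trajectory
```

A real implementation would integrate raw accelerometer and gyroscope data with drift compensation; this sketch only shows the integration structure.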
Step 203, sending the target information corresponding to the current shooting operation to the target terminal device, so that the target terminal device generates a three-dimensional model of the shooting scene based on the target information corresponding to at least one shooting operation respectively.
In this embodiment, the executing body may send target information corresponding to the current shooting operation to the target terminal device. The target information may include an image acquired by the photographing operation and a movement track corresponding to the photographing operation. The target terminal device is generally a terminal device that determines a shooting position by using the target information, thereby constructing and displaying a three-dimensional model of a shooting scene.
After receiving the target information, the target terminal device may generate a three-dimensional model of the shooting scene based on the target information respectively corresponding to at least one shooting operation. Here, the target terminal device may determine the position information of the shooting position corresponding to the current shooting operation by using the movement track corresponding to the current shooting operation; specifically, it may determine the end position of that movement track as the shooting position of the shooting terminal. Then, using the shooting positions corresponding to the current and previous shooting operations, the image acquired by the current shooting operation can be spliced with the image acquired by the previous shooting operation, thereby obtaining a three-dimensional model of the shooting scene.
Thereafter, if the execution body detects a position movement again, it may repeat steps 201 to 203 until shooting is complete.
The method provided by the embodiment of the present disclosure starts to collect movement information in response to detecting a position movement of the shooting terminal; in response to detecting a shooting operation, it ends the collection of movement information and determines a movement track corresponding to the current shooting operation by using the collected movement information; it then sends target information corresponding to the current shooting operation to a target terminal device, so that the target terminal device generates a three-dimensional model of the shooting scene based on the target information respectively corresponding to at least one shooting operation, where the target information comprises the image acquired by the shooting operation and the movement track corresponding to the shooting operation. In this way, the position information of the shooting position corresponding to each shooting operation is determined and the captured images can be associated with their shooting positions, so that the captured images are spliced into a three-dimensional model of the shooting scene; this reduces the complexity of user operation and improves the accuracy of the generated three-dimensional model.
In some alternative implementations, the movement information may include acceleration information and angle information. The shooting terminal may be provided with an inertial measurement unit (IMU), which the execution body may use to acquire the acceleration information and angle information: since the IMU contains an accelerometer and a gyroscope (angular velocity sensor), it can collect both. Before shooting starts, the IMU in the execution body generally needs to be initialized.
In some optional implementations, the execution body may acquire shooting direction information corresponding to the shooting operation. The shooting terminal may be provided with an electronic compass chip. An electronic compass, also called a digital compass, determines the direction of the magnetic north pole using the geomagnetic field, so the due-north direction can be obtained through the electronic compass chip. During shooting, the execution body may use the electronic compass chip to collect compass data, i.e., the due-north direction and the orientation of the shooting terminal's camera. The shooting direction corresponding to the shooting operation can then be estimated from this data; specifically, the shooting direction of the camera can be determined from the angle between the camera's orientation and due north. Before shooting starts, the electronic compass chip in the execution body generally needs to be initialized.
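As a rough sketch of how a heading angle relative to due north might be derived from raw horizontal magnetometer components (an actual electronic compass chip performs this internally, along with tilt compensation and declination correction that are omitted here; the function and its axis conventions are assumptions for illustration):

```python
import math

def heading_from_magnetometer(mx, my):
    """Estimate the camera heading from horizontal magnetometer components.

    Returns degrees clockwise from due north, assuming the device is held
    level and (mx, my) are the geomagnetic field components along the
    device's right and forward axes, respectively.
    """
    return math.degrees(math.atan2(mx, my)) % 360.0
```

For example, a field reading aligned with the device's forward axis yields 0 degrees (camera facing due north), while a reading aligned with the right axis yields 90 degrees (facing east).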
In some optional implementations, if the execution body acquires shooting direction information corresponding to a shooting operation, the target information it sends to the target terminal device may further include that shooting direction information.
With further reference to FIG. 3, a flow 300 of yet another embodiment of a three-dimensional model generation method is illustrated. The three-dimensional model generation method can be applied to terminal equipment. The process 300 of the three-dimensional model generation method includes the following steps:
step 301, receiving target information respectively corresponding to at least one shooting operation sent by a shooting terminal.
In this embodiment, an executing subject (for example, a terminal device shown in fig. 1) of the three-dimensional model generation method may receive target information respectively corresponding to at least one shooting operation transmitted by a shooting terminal. The target information may include an image captured by the photographing operation and a movement trajectory corresponding to the photographing operation.
Here, the shooting device may start collecting movement information when it detects a position movement. Then, if a shooting operation is detected, the shooting device may end collecting the movement information and determine a movement track corresponding to the current shooting operation by using the collected movement information. The acquired image and the movement track corresponding to the current shooting operation may then be transmitted to the execution body.
Each time the shooting terminal finishes shooting at its current shooting position, it may send the corresponding target information to the execution body, and this continues until shooting is complete.
And step 302, determining position information of a shooting position corresponding to at least one shooting operation by using the movement track corresponding to the at least one shooting operation.
In this embodiment, the executing body may determine the position information of the shooting position corresponding to the at least one shooting operation by using the movement track corresponding to the at least one shooting operation. Specifically, for each of the at least one photographing operation, the execution body may determine an end position of a movement trajectory corresponding to the photographing operation as a photographing position of the photographing terminal.
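The rule above, taking the end point of each movement track as a shooting position, can be sketched as follows. Treating each per-operation track as being recorded relative to the previous shooting position, so that consecutive end points chain into absolute positions, is an assumption made for illustration.

```python
def shooting_position(trajectory):
    """Return the shooting position for one shooting operation: the end
    point of the movement track recorded since the previous shot."""
    if not trajectory:
        raise ValueError("empty trajectory")
    return trajectory[-1]

def shooting_positions(trajectories, origin=(0.0, 0.0)):
    """Chain per-operation movement tracks (each relative to the previous
    shooting position) into absolute shooting positions."""
    positions = []
    x, y = origin
    for traj in trajectories:
        dx, dy = shooting_position(traj)  # displacement since last shot
        x, y = x + dx, y + dy
        positions.append((x, y))
    return positions
```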
Step 303, generating a three-dimensional model of the shooting scene based on the images acquired by the at least one shooting operation and the corresponding position information.
In this embodiment, the execution body may generate a three-dimensional model of the shooting scene based on the images acquired by the at least one shooting operation and the corresponding position information. Specifically, for each shooting operation, the execution body may use the position point indicated by that operation's position information and the position point indicated by the previous operation's position information to splice the image acquired by that operation with the image acquired by the previous operation, thereby obtaining a three-dimensional model of the shooting scene. In this way, the image acquired by the second shooting operation is spliced with the image acquired by the first, the image acquired by the third is spliced with the result of the first two, and so on, until the images acquired by all shooting operations have been spliced and the three-dimensional model of the shooting scene is obtained.
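The incremental splicing order described above can be sketched as follows. Real splicing would also align overlapping scene geometry between shots; this illustration only translates per-shot local points into a common frame, and the data layout (one 2D point list per shot) is an assumption for illustration.

```python
def stitch_incrementally(shots):
    """Merge per-shot local point sets into one scene model.

    `shots` is a list of (position, points) pairs in capture order, where
    `position` is the (x, y) shooting position and `points` are (x, y)
    scene points expressed relative to that position. Each shot is
    translated into the common frame anchored at the first shooting
    position, mirroring the pairwise splicing order described above.
    """
    model = []
    for (px, py), points in shots:
        # shift each local point by its shot's absolute shooting position
        model.extend((px + x, py + y) for x, y in points)
    return model
```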
The method provided by the above embodiment of the present disclosure receives the collected images and movement tracks respectively corresponding to at least one shooting operation sent by the shooting terminal; then determines the position information of the shooting position corresponding to the at least one shooting operation by using the corresponding movement tracks; and then generates a three-dimensional model of the shooting scene based on the images acquired by the at least one shooting operation and the corresponding position information. In this way, the position information of the shooting position corresponding to each shooting operation is determined and the captured images can be associated with their shooting positions, so that the captured images are spliced into a three-dimensional model of the shooting scene; this reduces the complexity of user operation and improves the accuracy of the generated three-dimensional model.
In some optional implementations, the target information may further include shooting direction information corresponding to a shooting operation. The shooting terminal may be provided with an electronic compass chip. An electronic compass, also known as a compass, determines direction by means of the geomagnetic field, so the due north and due south directions can be obtained through the electronic compass chip. During shooting, the shooting terminal can acquire electronic compass data through the electronic compass chip, namely the due north direction and the orientation of the camera of the shooting terminal. The current shooting direction can then be estimated from the electronic compass data. Specifically, the current shooting direction of the camera can be determined from the included angle between the orientation of the camera and the due north direction.
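A hedged sketch of how the included angle with due north yields a shooting direction is given below. The axis convention is an assumption (the first magnetometer component along due north, the second toward due east), and tilt compensation and magnetic-declination correction, which a production electronic compass performs, are omitted.

```python
import math

def shooting_direction(mag_north, mag_east):
    """Estimate the camera azimuth in degrees, measured clockwise from
    due north, from horizontal magnetometer components (hypothetical
    convention: mag_north along due north, mag_east toward due east).
    The result is the included angle between the camera orientation
    and the due north direction, normalized to [0, 360)."""
    return math.degrees(math.atan2(mag_east, mag_north)) % 360.0
```

For example, a camera facing due east yields 90 degrees and one facing due west yields 270 degrees, matching the usual compass-rose convention.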
In some optional implementations, after the executing body constructs the three-dimensional model of the shooting scene, the executing body may display the three-dimensional model of the shooting scene. In addition, the executing body may identify a shooting position corresponding to each shooting operation on the displayed three-dimensional model.
With further reference to fig. 4, as an implementation of the methods shown in the above-mentioned figures, the present application provides an embodiment of a three-dimensional model generation apparatus, which corresponds to the method embodiment shown in fig. 2, and which may be specifically applied in a shooting terminal.
As shown in fig. 4, the three-dimensional model generation apparatus 400 of the present embodiment includes: an acquisition unit 401, a determination unit 402 and a sending unit 403. The acquisition unit 401 is configured to start acquiring movement information in response to detecting position movement of the shooting terminal; the determination unit 402 is configured to end acquiring the movement information in response to detecting a shooting operation, and determine a movement track corresponding to the current shooting operation by using the acquired movement information; the sending unit 403 is configured to send target information corresponding to the current shooting operation to a target terminal device, so that the target terminal device generates a three-dimensional model of a shooting scene based on target information respectively corresponding to at least one shooting operation, where the target information includes an image acquired by the shooting operation and a movement track corresponding to the shooting operation.
In this embodiment, the specific processing of the acquisition unit 401, the determination unit 402 and the sending unit 403 of the three-dimensional model generation apparatus 400 may refer to step 201, step 202 and step 203 in the corresponding embodiment of fig. 2.
In some alternative implementations, the movement information may include acceleration information and angle information.
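How acceleration and angle samples could be integrated into a movement track can be sketched as follows. This assumes a hypothetical flat-motion model in which acceleration acts along the current heading angle; gravity removal, sensor bias, and drift correction, which real inertial dead reckoning requires, are all omitted.

```python
import math

def dead_reckon(samples, dt):
    """Integrate (acceleration, angle) samples into a movement track.

    samples: list of (accel, angle) pairs, where accel is acceleration in
    m/s^2 along the heading and angle is the heading in degrees clockwise
    from due north (hypothetical convention). dt is the sample period in
    seconds. Returns the track as (east, north) points starting at (0, 0).
    """
    x = y = speed = 0.0
    track = [(0.0, 0.0)]
    for accel, angle in samples:
        speed += accel * dt                # integrate acceleration -> speed
        rad = math.radians(angle)
        x += speed * math.sin(rad) * dt    # east displacement component
        y += speed * math.cos(rad) * dt    # north displacement component
        track.append((x, y))
    return track
```

With two one-second samples, accelerating 1 m/s^2 heading north and then coasting heading east, the track ends one metre north and one metre east of the start, illustrating how the acceleration and angle information jointly determine the movement track.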
In some optional implementations, the three-dimensional model generation apparatus 400 may further include an obtaining unit (not shown in the figure). The obtaining unit may obtain shooting direction information corresponding to the shooting operation.
In some optional implementations, the target information may further include shooting direction information corresponding to the shooting operation.
With further reference to fig. 5, as an implementation of the method shown in the above figures, the present application provides another embodiment of a three-dimensional model generation apparatus, which corresponds to the embodiment of the method shown in fig. 3, and which can be applied to various terminal devices.
As shown in fig. 5, the three-dimensional model generation apparatus 500 of the present embodiment includes: a receiving unit 501, a determining unit 502 and a generating unit 503. The receiving unit 501 is configured to receive target information respectively corresponding to at least one shooting operation sent by a shooting terminal, where the target information includes an image collected by the shooting operation and a moving track corresponding to the shooting operation; the determining unit 502 is configured to determine position information of a shooting position corresponding to at least one shooting operation by using a movement trajectory corresponding to the at least one shooting operation; the generating unit 503 is configured to generate a three-dimensional model of the shooting scene based on the images acquired by the at least one shooting operation and the corresponding position information.
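The determining unit's role can be sketched as follows, under hypothetical assumptions: each movement track is represented as a list of (x, y) points recorded between the previous shot and the current one, and each shooting position is the previous shooting position advanced by the track's net displacement, with the starting point anchoring the coordinate frame.

```python
def positions_from_tracks(tracks):
    """Turn per-shot movement tracks into shooting positions.

    tracks: one track per shooting operation, each a list of (x, y)
    points recorded since the previous shooting operation. Returns one
    position per shooting operation, expressed in the frame of the
    starting point.
    """
    pos = (0.0, 0.0)
    positions = []
    for track in tracks:
        dx = track[-1][0] - track[0][0]  # net displacement of this track
        dy = track[-1][1] - track[0][1]
        pos = (pos[0] + dx, pos[1] + dy)
        positions.append(pos)
    return positions
```

For instance, a first track displacing (1, 1) and a second displacing (2, 0) place the two shooting positions at (1, 1) and (3, 1), which is the position information the generating unit would then combine with the acquired images.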
In the present embodiment, specific processing of the receiving unit 501, the determining unit 502, and the generating unit 503 of the three-dimensional model generating apparatus 500 may refer to step 301, step 302, and step 303 in the corresponding embodiment of fig. 3.
In some optional implementations, the target information may further include shooting direction information corresponding to a shooting operation.
In some optional implementations, the three-dimensional model generation apparatus 500 may further include a presentation unit (not shown in the figures). The display unit may display the three-dimensional model of the shooting scene.
Referring now to fig. 6, a schematic diagram of an electronic device (e.g., the capture terminal or terminal device of fig. 1) 600 suitable for use in implementing embodiments of the present disclosure is shown. The photographing terminal in the embodiment of the present disclosure may include, but is not limited to, a photographing terminal such as a camera (e.g., VR camera), a mobile phone, a PAD (tablet computer), and the like. The terminal device in the embodiments of the present disclosure may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle terminal (e.g., a car navigation terminal), and the like, and a fixed terminal such as a digital TV, a desktop computer, and the like. The electronic device shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 6, the electronic device 600 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 601 that may perform various appropriate actions and processes in accordance with a program stored in a read-only memory (ROM) 602 or a program loaded from a storage device 608 into a random access memory (RAM) 603. The RAM 603 also stores various programs and data necessary for the operation of the electronic device 600. The processing device 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
Generally, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 607 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, and the like; and a communication device 609. The communication device 609 may allow the electronic device 600 to communicate with other devices wirelessly or by wire to exchange data. While fig. 6 illustrates an electronic device 600 having various devices, it is to be understood that not all illustrated devices are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided. Each block shown in fig. 6 may represent one device or may represent multiple devices as desired.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 609, or may be installed from the storage means 608, or may be installed from the ROM 602. The computer program, when executed by the processing device 601, performs the above-described functions defined in the methods of embodiments of the present disclosure. It should be noted that the computer readable medium described in the embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. 
In embodiments of the present disclosure, however, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be included in the shooting terminal; or may exist separately without being assembled into the shooting terminal. The computer readable medium carries one or more programs which, when executed by the shooting terminal, cause the shooting terminal to: start acquiring movement information in response to detecting position movement of the shooting terminal; in response to detecting a shooting operation, end acquiring the movement information, and determine a movement track corresponding to the current shooting operation by using the acquired movement information; and send target information corresponding to the current shooting operation to a target terminal device, so that the target terminal device generates a three-dimensional model of a shooting scene based on target information respectively corresponding to at least one shooting operation, where the target information includes an image acquired by the shooting operation and a movement track corresponding to the shooting operation.
The computer-readable medium may be included in the terminal device; or may exist separately without being assembled into the terminal device. The computer-readable medium carries one or more programs which, when executed by the terminal device, cause the terminal device to: receiving target information respectively corresponding to at least one shooting operation sent by a shooting terminal, wherein the target information comprises an image acquired by the shooting operation and a moving track corresponding to the shooting operation; determining position information of a shooting position corresponding to at least one shooting operation by using a moving track corresponding to the at least one shooting operation; and generating a three-dimensional model of the shooting scene based on the images acquired by the at least one shooting operation and the corresponding position information.
Computer program code for carrying out operations of embodiments of the present disclosure may be written in any combination of one or more programming languages, including object oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
According to one or more embodiments of the present disclosure, there is provided a three-dimensional model generation method applied to a photographing terminal, the method including: starting to collect movement information in response to the detection of the position movement of the shooting terminal; in response to the detection of the shooting operation, ending the collection of the movement information, and determining a movement track corresponding to the current shooting operation by using the collected movement information; and sending target information corresponding to the current shooting operation to target terminal equipment so that the target terminal equipment can generate a three-dimensional model of a shooting scene based on the target information respectively corresponding to at least one shooting operation, wherein the target information comprises an image acquired by the shooting operation and a moving track corresponding to the shooting operation.
According to one or more embodiments of the present disclosure, the movement information includes acceleration information and angle information.
According to one or more embodiments of the present disclosure, the method further includes: acquiring shooting direction information corresponding to the shooting operation.
According to one or more embodiments of the present disclosure, the target information further includes shooting direction information corresponding to the shooting operation.
According to one or more embodiments of the present disclosure, there is provided a three-dimensional model generation method applied to a terminal device, the method including: receiving target information respectively corresponding to at least one shooting operation sent by a shooting terminal, wherein the target information comprises an image acquired by the shooting operation and a moving track corresponding to the shooting operation; determining position information of a shooting position corresponding to at least one shooting operation by using a moving track corresponding to the at least one shooting operation; and generating a three-dimensional model of the shooting scene based on the images acquired by the at least one shooting operation and the corresponding position information.
According to one or more embodiments of the present disclosure, the target information further includes shooting direction information corresponding to a shooting operation.
According to one or more embodiments of the present disclosure, the method further comprises: displaying the three-dimensional model of the shooting scene.
According to one or more embodiments of the present disclosure, there is provided a three-dimensional model generation apparatus provided in a shooting terminal, the apparatus including: an acquisition unit configured to start acquiring movement information in response to detecting position movement of the shooting terminal; a determination unit configured to end acquiring the movement information in response to detecting a shooting operation, and determine a movement track corresponding to the current shooting operation by using the acquired movement information; and a sending unit configured to send target information corresponding to the current shooting operation to a target terminal device, so that the target terminal device generates a three-dimensional model of a shooting scene based on target information respectively corresponding to at least one shooting operation, where the target information includes an image acquired by the shooting operation and a movement track corresponding to the shooting operation.
According to one or more embodiments of the present disclosure, the movement information includes acceleration information and angle information.
According to one or more embodiments of the present disclosure, the apparatus further includes: an obtaining unit configured to obtain shooting direction information corresponding to the shooting operation.
According to one or more embodiments of the present disclosure, the target information further includes shooting direction information corresponding to the shooting operation.
According to one or more embodiments of the present disclosure, there is provided a three-dimensional model generation apparatus provided in a terminal device, the apparatus including: a receiving unit configured to receive target information respectively corresponding to at least one shooting operation sent by a shooting terminal, where the target information includes an image acquired by the shooting operation and a movement track corresponding to the shooting operation; a determining unit configured to determine position information of a shooting position corresponding to the at least one shooting operation by using the movement track corresponding to the at least one shooting operation; and a generating unit configured to generate a three-dimensional model of the shooting scene based on the images acquired by the at least one shooting operation and the corresponding position information.

According to one or more embodiments of the present disclosure, the target information further includes shooting direction information corresponding to a shooting operation.
According to one or more embodiments of the present disclosure, the apparatus further comprises a display unit. The display unit is configured to display the three-dimensional model of the shooting scene.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. The described units may also be provided in a processor, which may, for example, be described as: a processor including an acquisition unit, a determination unit, and a sending unit; or as: a processor including a receiving unit, a determining unit, and a generating unit. The names of these units do not, in some cases, limit the units themselves; for example, the acquisition unit may also be described as "a unit that starts acquiring movement information in response to detecting position movement of the shooting terminal".
The foregoing description is merely a description of preferred embodiments of the present disclosure and of the technical principles employed. It will be appreciated by those skilled in the art that the scope of the invention in the embodiments of the present disclosure is not limited to technical solutions formed by the specific combination of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalents without departing from the above inventive concept — for example, technical solutions formed by replacing the above features with (but not limited to) technical features with similar functions disclosed in the embodiments of the present disclosure.

Claims (12)

1. A three-dimensional model generation method is applied to a shooting terminal and is characterized by comprising the following steps:
starting to collect movement information in response to the detection of the position movement of the shooting terminal;
in response to the detection of the shooting operation, ending the collection of the movement information, and determining a movement track corresponding to the current shooting operation by using the collected movement information;
and sending target information corresponding to the current shooting operation to target terminal equipment so that the target terminal equipment can generate a three-dimensional model of a shooting scene based on the target information respectively corresponding to at least one shooting operation, wherein the target information comprises an image acquired by the shooting operation and a moving track corresponding to the shooting operation.
2. The method of claim 1, wherein the movement information comprises acceleration information and angle information.
3. The method according to claim 1, further comprising:
acquiring shooting direction information corresponding to the shooting operation.
4. The method according to claim 3, wherein the target information further includes shooting direction information corresponding to the shooting operation.
5. A three-dimensional model generation method is applied to terminal equipment and is characterized by comprising the following steps:
receiving target information respectively corresponding to at least one shooting operation sent by a shooting terminal, wherein the target information comprises an image acquired by the shooting operation and a moving track corresponding to the shooting operation;
determining position information of a shooting position corresponding to at least one shooting operation by using a moving track corresponding to the at least one shooting operation;
and generating a three-dimensional model of the shooting scene based on the images acquired by the at least one shooting operation and the corresponding position information.
6. The method according to claim 5, wherein the target information further includes shooting direction information corresponding to a shooting operation.
7. The method of claim 5 or 6, further comprising:
displaying the three-dimensional model of the shooting scene.
8. A three-dimensional model generation device is arranged on a shooting terminal, and is characterized by comprising:
the acquisition unit is used for responding to the detection of the position movement of the shooting terminal and starting to acquire movement information;
the determining unit is used for responding to the detection of the shooting operation, finishing the collection of the movement information and determining a movement track corresponding to the current shooting operation by utilizing the collected movement information;
the device comprises a sending unit, a receiving unit and a processing unit, wherein the sending unit is used for sending target information corresponding to current shooting operation to target terminal equipment so that the target terminal equipment can generate a three-dimensional model of a shooting scene based on the target information respectively corresponding to at least one shooting operation, and the target information comprises images acquired by the shooting operation and a moving track corresponding to the shooting operation.
9. A three-dimensional model generation device arranged on a terminal device is characterized by comprising:
a receiving unit, configured to receive target information respectively corresponding to at least one shooting operation sent by a shooting terminal, wherein the target information comprises an image acquired by the shooting operation and a movement track corresponding to the shooting operation;
the determining unit is used for determining the position information of the shooting position corresponding to the at least one shooting operation by using the moving track corresponding to the at least one shooting operation;
and the generating unit is used for generating a three-dimensional model of the shooting scene based on the image acquired by at least one shooting operation and the corresponding position information.
10. A shooting terminal, characterized by comprising:
one or more processors;
a storage device having one or more programs stored thereon;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method recited in any of claims 1-4.
11. A terminal device, comprising:
one or more processors;
a storage device having one or more programs stored thereon;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 5-7.
12. A computer-readable medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-4, 5-7.
CN202210517010.7A 2022-05-11 2022-05-11 Three-dimensional model generation method and device, shooting terminal and terminal equipment Pending CN114945088A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210517010.7A CN114945088A (en) 2022-05-11 2022-05-11 Three-dimensional model generation method and device, shooting terminal and terminal equipment

Publications (1)

Publication Number Publication Date
CN114945088A true CN114945088A (en) 2022-08-26

Family

ID=82906418

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210517010.7A Pending CN114945088A (en) 2022-05-11 2022-05-11 Three-dimensional model generation method and device, shooting terminal and terminal equipment

Country Status (1)

Country Link
CN (1) CN114945088A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108012073A (en) * 2016-10-28 2018-05-08 努比亚技术有限公司 A kind of method and device for realizing pan-shot
CN108702447A (en) * 2017-09-29 2018-10-23 深圳市大疆创新科技有限公司 A kind of method for processing video frequency, equipment, unmanned plane and system
CN109660723A (en) * 2018-12-18 2019-04-19 维沃移动通信有限公司 A kind of panorama shooting method and device
CN110505463A (en) * 2019-08-23 2019-11-26 上海亦我信息技术有限公司 Based on the real-time automatic 3D modeling method taken pictures
EP3651448A1 (en) * 2018-11-07 2020-05-13 Nokia Technologies Oy Panoramas
EP3945718A2 (en) * 2020-07-31 2022-02-02 Beijing Xiaomi Mobile Software Co., Ltd. Control method and apparatus, electronic device, and storage medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination