CN111918049B - Three-dimensional imaging method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN111918049B
CN111918049B (application CN202010818526.6A)
Authority
CN
China
Prior art keywords: image, camera, training, dimensional, shooting
Prior art date
Legal status (assumed, not a legal conclusion): Active
Application number
CN202010818526.6A
Other languages
Chinese (zh)
Other versions
CN111918049A (en)
Inventor
黄仲华
周成富
Current Assignee
Guangdong Schen Industrial Investment Co ltd
Original Assignee
Guangdong Schen Industrial Investment Co ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Schen Industrial Investment Co ltd filed Critical Guangdong Schen Industrial Investment Co ltd
Priority to CN202010818526.6A priority Critical patent/CN111918049B/en
Publication of CN111918049A publication Critical patent/CN111918049A/en
Application granted granted Critical
Publication of CN111918049B publication Critical patent/CN111918049B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 - Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 - Image signal generators
    • H04N13/204 - Image signal generators using stereoscopic image cameras
    • H04N13/207 - Image signal generators using stereoscopic image cameras using a single 2D image sensor
    • H04N13/221 - Image signal generators using stereoscopic image cameras using a single 2D image sensor using the relative movement between cameras and objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 - 3D [Three Dimensional] image rendering
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 - Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 - Image signal generators
    • H04N13/261 - Image signal generators with monoscopic-to-stereoscopic image conversion
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 - Indexing scheme for image data processing or generation, in general
    • G06T2200/04 - Indexing scheme for image data processing or generation, in general involving 3D image data

Abstract

The application discloses a three-dimensional imaging method and device, electronic equipment and a storage medium. The scheme is as follows: a camera performs a preliminary shot of a first object to obtain a first image; taking the first image as a reference, the shooting position and working parameters of the camera are adjusted at least once, and the first object is shot with the camera after each adjustment to obtain a plurality of second images; the plurality of second images are input into a first image training model, which is obtained by training on a plurality of sets of training data, each set comprising images of a second object at a plurality of angles and three-dimensional image information of the second object; the three-dimensional image information of the first object output by the first image training model is obtained from the model; and a three-dimensional image of the first object is constructed from that information. The embodiments of the application can improve the efficiency and accuracy of three-dimensional imaging.

Description

Three-dimensional imaging method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of three-dimensional imaging technologies, and in particular, to a method and an apparatus for three-dimensional imaging, an electronic device, and a storage medium.
Background
Generally, a three-dimensional image of an object is obtained by photographing the object at various angles with a camera and having a computer combine the photographs into a 360-degree three-dimensional image.
However, when an object is photographed at various angles, the angles are too arbitrary, and the sharpness and exposure of each picture differ. When pictures taken under such varying conditions are fed into a computer for synthesis, the result is often mediocre, and sometimes the multi-angle pictures must be retaken and re-synthesized. The synthesis efficiency is therefore low.
Disclosure of Invention
The present application provides a method, an apparatus, an electronic device and a storage medium for three-dimensional imaging, so as to solve or alleviate one or more technical problems in the prior art.
According to an aspect of the present application, there is provided a method of three-dimensional imaging, comprising:
the method comprises the steps that a camera is used for conducting primary shooting on a first object to obtain a first image;
taking the first image as a reference, adjusting the shooting position and working parameters of a camera at least once, and shooting the first object by using the camera after the camera is adjusted every time to obtain a plurality of second images;
inputting the plurality of second images into a first image training model, wherein the first image training model is obtained by training a plurality of sets of training data, and each set of training data in the plurality of sets of training data comprises: images of a second object at a plurality of angles and three-dimensional image information of the second object;
obtaining three-dimensional image information of the first object output by the first image training model from the first image training model;
and constructing a three-dimensional image of the first object according to the three-dimensional image information of the first object.
In some embodiments, the adjusting the camera position and the operating parameters at least once based on the first image, and capturing the first object with the camera after each adjustment of the camera to obtain a plurality of second images includes:
according to the first image, determining the position and working parameters of the camera when the camera obtains the first image, and determining the size information of the first object;
determining a matrix for adjusting the camera according to the position and working parameters of the camera when the camera obtains the first image through shooting and the size information of the first object, wherein the numerical values of the rows and columns of the matrix represent the position of the camera, and the numerical value of each element in the matrix represents the working parameters of the camera;
and successively adjusting the camera according to each element in the matrix, and shooting the first object by using the adjusted camera after each camera adjustment to obtain a second image corresponding to each element in the matrix.
In some embodiments, after obtaining the second image, the method further comprises:
identifying whether a background or other objects are included in the second image in addition to the first object;
if the second image includes a background or other object in addition to the first object, the second image is processed to obtain a second image including only the first object.
In some embodiments, said processing said second image if said second image includes a background or other object in addition to said first object comprises:
inputting the second image into a second image training model, wherein the second image training model is obtained by training a plurality of sets of training data, and each set of training data in the plurality of sets of training data comprises: an image including a third object and a background or other object and an image including only the third object;
and acquiring a second image which only comprises the first object and is output by the second image training model from the second image training model.
In some embodiments, said constructing a three-dimensional image of said first object from three-dimensional image information of said first object comprises:
acquiring three-dimensional coordinates and pixels of each pixel point forming the three-dimensional image of the first object from the three-dimensional image information of the first object;
inputting the three-dimensional coordinates and the pixels of each pixel point into a third image training model, wherein the third image training model is obtained by training a plurality of groups of training data, and each group of training data in the plurality of groups of training data comprises: the three-dimensional coordinates and pixels of each pixel point in the three-dimensional image used for training and the three-dimensional image used for training are obtained;
and acquiring a three-dimensional image output by the third image training model from the third image training model.
According to an aspect of the present application, there is provided an apparatus for three-dimensional imaging, including:
the first image acquisition module is used for carrying out preliminary shooting on a first object by utilizing a camera to obtain a first image;
the second image acquisition module is used for adjusting the shooting position and working parameters of a camera at least once by taking the first image as a reference, and shooting the first object by using the camera after the camera is adjusted every time to obtain a plurality of second images;
a first model input module, configured to input the plurality of second images into a first image training model, where the first image training model is obtained by training a plurality of sets of training data, and each set of training data in the plurality of sets of training data includes: images of a second object at a plurality of angles and three-dimensional image information of the second object;
the three-dimensional image information acquisition module is used for acquiring, from the first image training model, the three-dimensional image information of the first object output by the first image training model;
and the three-dimensional image acquisition module is used for constructing a three-dimensional image of the first object according to the three-dimensional image information of the first object.
In some embodiments, the second image acquisition module comprises:
the first image information acquisition unit is used for determining the position and working parameters of the camera when the camera obtains the first image through shooting and determining the size information of the first object according to the first image;
a matrix determining unit, configured to determine a matrix for adjusting the camera according to the position and working parameters of the camera when it captured the first image and the size information of the first object, where the values of the rows and columns of the matrix represent the position of the camera, and the value of each element in the matrix represents the working parameters of the camera;
and the second image shooting unit is used for adjusting the camera one by one according to each element in the matrix, and shooting the first object by using the adjusted camera after each camera adjustment to obtain a second image corresponding to each element in the matrix.
In some embodiments, the three-dimensional image acquisition module comprises:
an information acquisition unit, configured to acquire three-dimensional coordinates and pixels of each pixel point that forms a three-dimensional image of the first object from three-dimensional image information of the first object;
a third model input unit, configured to input the three-dimensional coordinates and pixels of each pixel point into a third image training model, where the third image training model is obtained by training multiple sets of training data, and each set of training data in the multiple sets of training data includes: the three-dimensional coordinates and pixels of each pixel point in the three-dimensional image used for training and the three-dimensional image used for training are obtained;
and the three-dimensional image output unit is used for acquiring the three-dimensional image output by the third image training model from the third image training model.
According to an aspect of the present application, there is provided an electronic device including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
The memory stores instructions executable by the at least one processor to cause the at least one processor to perform a method provided by any of the embodiments of the present application.
According to an aspect of the present application, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform a method provided by an embodiment of the present application.
According to the embodiments of the application, the shooting position of the camera and its working parameters during shooting can be adjusted accurately multiple times to obtain multiple pictures meeting the synthesis requirement. In addition, rather than directly synthesizing pictures, the present application extracts three-dimensional image information from a plurality of two-dimensional pictures and then constructs the three-dimensional image from that information. The method is therefore more efficient and accurate.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present application, nor do they limit the scope of the present application. Other features of the present application will become apparent from the following description.
Drawings
In the drawings, like reference numerals refer to the same or similar parts or elements throughout the several views unless otherwise specified. The figures are not necessarily to scale. It is appreciated that these drawings depict only some embodiments in accordance with the disclosure and are not to be considered limiting of its scope.
FIG. 1 is a schematic flow chart diagram of a method of three-dimensional imaging in an embodiment of the present application;
FIG. 2 is a schematic diagram of a second image acquisition process according to an embodiment of the application;
FIG. 3 is a schematic diagram of a three-dimensional imaging construction process according to an embodiment of the present application;
FIG. 4 is a block diagram of an apparatus for three-dimensional imaging provided by an embodiment of the present application;
FIG. 5 is a block diagram of an electronic device for the method of three-dimensional imaging according to an embodiment of the application.
Detailed Description
In the following, only certain exemplary embodiments are briefly described. As those skilled in the art will recognize, the described embodiments may be modified in various different ways, all without departing from the spirit or scope of the present application. Accordingly, the drawings and description are to be regarded as illustrative in nature, and not as restrictive.
Fig. 1 shows a schematic flow chart of a method of three-dimensional imaging according to an embodiment of the present application. The embodiment of the application can be applied to a studio, a camera box or other camera devices, and can shoot any object and acquire a three-dimensional image. As shown in fig. 1, the method may include:
s100, carrying out primary shooting on a first object by using a camera to obtain a first image;
step S200, taking the first image as a reference, adjusting the shooting position and working parameters of the camera at least once, and shooting the first object by using the camera after the camera is adjusted every time to obtain a plurality of second images;
step S300, inputting a plurality of second images into a first image training model, wherein the first image training model is obtained by training a plurality of groups of training data, and each group of training data in the plurality of groups of training data comprises: images of a plurality of angles of the second object and three-dimensional image information of the second object;
s400, obtaining three-dimensional image information of a first object output by a first image training model from the first image training model;
and S500, constructing a three-dimensional image of the first object according to the three-dimensional image information of the first object.
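As a minimal sketch, the five steps can be lined up as a plain-Python pipeline. Every name here (the capture functions, the `model` and `builder` callables, the camera dict) is a hypothetical stand-in for the camera control and model inference the method describes, not an implementation from the application:

```python
# Hypothetical sketch of steps S100-S500. The camera is modelled as a
# dict with a "shoot" callable; `model` stands in for the first image
# training model, `builder` for the three-dimensional construction.

def capture_first_image(camera):
    # Step S100: preliminary shot at the camera's current position and
    # working parameters.
    return camera["shoot"]()

def capture_second_images(camera, adjustments):
    # Step S200: adjust the position and working parameters once per
    # entry, shooting the first object after each adjustment.
    images = []
    for position, params in adjustments:
        camera["position"], camera["params"] = position, params
        images.append(camera["shoot"]())
    return images

def three_dimensional_imaging(camera, adjustments, model, builder):
    # The first image is the reference from which `adjustments` would
    # be derived in the method; this sketch takes them as given.
    first = capture_first_image(camera)                    # step S100
    seconds = capture_second_images(camera, adjustments)   # step S200
    info = model(seconds)                                  # steps S300-S400
    return builder(info)                                   # step S500
```

With stub callables this only demonstrates the data flow: several second images go into the model, three-dimensional image information comes out, and the builder turns it into the final image.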
In the embodiment of the application, the position and the working parameters of the camera for shooting the object subsequently can be adjusted successively by utilizing the image shot by the camera for the first time, so that the second images in multiple angles meeting the standard can be obtained, and the shooting efficiency is improved. Then, after the second images of the plurality of angles are obtained, the information of the three-dimensional image forming the object, namely the three-dimensional image information of the first object, is extracted by using the second images of the plurality of angles, and then the three-dimensional image is constructed by using the three-dimensional image information, so that the method is more efficient and accurate.
In step S100, "first" serves only to distinguish the first object from the second object and implies no order; the first object denotes the object to be photographed so as to obtain its three-dimensional image, and it may be a static object or an animal. The preliminary shot of the first object may be taken at random, or the first object may be shot at the camera's current position with its currently set working parameters, so that no camera adjustment is needed for the preliminary shot.
The camera device of the embodiment of the application can comprise a camera, a moving mechanism for controlling the camera to ascend, descend and rotate, and a processor. The processor can control the lifting and the rotation of the moving mechanism to adjust the shooting position and can adjust the working parameters of the camera in shooting the first object at different positions. The operating parameters may include focus, exposure level, white balance, light compensation, etc.
The training model can be a neural network model from machine learning. Neural networks (NN) are complex network systems formed by large numbers of simple, widely interconnected processing units (neurons); they reflect many basic features of human brain function and are highly complex nonlinear dynamical learning systems. A neural network has large-scale parallelism, distributed storage and processing, self-organization, self-adaptation and self-learning capabilities, and is particularly suited to imprecise and fuzzy information-processing problems that must account for many factors and conditions simultaneously. A neural network model is described by a mathematical model of its neurons: an Artificial Neural Network (ANN) is a first-order description of characteristics of the human brain system, in short a mathematical model, represented by the network topology, node characteristics and learning rules.
In the embodiment of the present application, images including multiple angles of a set object, for example, a second object, are input into a training model as input data, each set of input training data includes images of multiple angles of a certain object and three-dimensional image information of the object, and the objects corresponding to each set may be different or the same. And training the images of the object at multiple angles by taking the three-dimensional image information of the corresponding object as supervision data to obtain output data. Thereby completing the training of the training model. The trained model may be applied to step S300.
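A generic supervised loop matching this description might look as follows; `predict` and `update` are hypothetical callables standing in for the network's forward pass and parameter update, which the application does not specify:

```python
def train_supervised(predict, update, training_sets, epochs=20):
    # Each training set pairs the input data (e.g. images of an object
    # at multiple angles) with its supervision data (the object's
    # three-dimensional image information). The per-step error is
    # recorded so training progress can be tracked.
    errors = []
    for _ in range(epochs):
        for inputs, supervision in training_sets:
            output = predict(inputs)
            update(inputs, supervision, output)  # adjust model parameters
            errors.append(abs(output - supervision))
    return errors
```

With a toy one-parameter model the recorded error shrinks over the epochs, which is all this sketch is meant to show; a real first image training model would use a deep network and image-valued data.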
Of course, in some embodiments, the accuracy or precision of the training model may be verified; if the accuracy of the output data drops below a set threshold, the training model may be retrained or its training deepened. The training data may be drawn from the camera's historical data or from data entered by other means.
In step S400, the three-dimensional image information may include the three-dimensional coordinates and pixel value of each pixel point. These can record the size, shape, color and so on of the object.
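One plausible way to hold this information is a flat list of surface points; the record layout below is illustrative, not taken from the application:

```python
from dataclasses import dataclass

@dataclass
class SurfacePoint:
    # Three-dimensional coordinates of one pixel point of the object.
    x: float
    y: float
    z: float
    # Pixel value at that point (here an RGB triple), carrying colour.
    rgb: tuple

def bounding_box(points):
    # The object's size can be read straight off the coordinates;
    # shape and colour likewise come from the points themselves.
    xs = [p.x for p in points]
    ys = [p.y for p in points]
    zs = [p.z for p in points]
    return (max(xs) - min(xs), max(ys) - min(ys), max(zs) - min(zs))
```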
In the above step S200, as shown in fig. 2, the process of adjusting the camera and the process of obtaining the second image may include the following steps:
step S210, according to the first image, determining the position and working parameters of the camera when the camera obtains the first image, and determining the size information of the first object;
step S220, determining a matrix for adjusting the camera according to the position and the working parameters of the camera when the camera obtains the first image in the shooting process and the size information of the first object, wherein the numerical values of the rows and the columns of the matrix represent the position of the camera, and the numerical value of each element in the matrix represents the working parameters of the camera;
and step S230, adjusting the camera one by one according to each element in the matrix, and shooting the first object by using the adjusted camera after each camera adjustment to obtain a second image corresponding to each element in the matrix.
In the embodiment of the application, from the first image one can determine the position of the camera during shooting, for example the lifting and rotating positions of the moving mechanism that carries the camera, and the working parameters: the focus, exposure and light-balance conditions of the camera during shooting. Of course, the environment of the first image can also be determined. If the environment includes other objects, they can be kept out of frame by adjusting the camera's position and focal distance. The size information of the first object is then determined from the first image and the camera's position.
In some embodiments, in order to obtain the matrix for adjusting the camera more accurately, the shooting position and working parameters may already be adjusted to some extent when the first image is captured, and the first image may also capture, as far as possible, the objects and background around the first object. Then, when the second images are captured in step S200, the position or working parameters of the camera can be adjusted to exclude those surrounding objects or the background.
For a matrix, the rows of the matrix may represent the height at which the camera is located, and the columns of the matrix may represent the rotation angle or rotated position at which the camera is located. Of course, the meanings indicated by the rows and columns of the matrix can be interchanged. The value for an element located in a column in a row in the matrix may represent the operating parameter when the camera is located at the position indicated by the column in the row.
The range of rows and columns of the matrix and the distance between two adjacent rows or two columns can be determined according to the size information of the first object. The size information of the first object may include a height, a width, and a thickness of the first object. For an unspecified object, the height may be the maximum height of the body shape, the width may be the maximum width of the body shape, and the thickness may be the maximum thickness of the body shape.
As for the working parameters of each element in the matrix, the working parameters of the adjacent elements and the distance between the adjacent elements can be referred to for adjustment, so that the adjusted working parameters are more accurate, and the effect graph of the obtained second image is better.
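A concrete reading of this matrix, under the assumption that rows index camera heights and columns index rotation angles (the text allows the two to be swapped), might be built like this; the parameter names are hypothetical:

```python
def adjustment_matrix(object_height, n_rows, n_cols, base_params):
    # Rows span the object's height; columns cover a full rotation.
    # Each element stores the working parameters the camera should use
    # at that (height, angle) position. A real implementation would
    # also refine each element's parameters from its neighbours and
    # their spacing, as the text describes; here every element starts
    # from the same base values.
    row_step = object_height / max(n_rows - 1, 1)
    col_step = 360.0 / n_cols
    return [
        [
            {"height": r * row_step,
             "angle": c * col_step,
             "params": dict(base_params)}
            for c in range(n_cols)
        ]
        for r in range(n_rows)
    ]
```

Shooting then walks this matrix element by element (step S230), moving the camera to each (height, angle) pose with that element's working parameters and taking one second image per element.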
In order to avoid that the second image contains other objects or background information to improve the accuracy of the three-dimensional image information of the first object obtained from the second image, after the second image is obtained, the method provided by the embodiment of the present application further includes: identifying whether a background or other object is included in the second image in addition to the first object; and if the second image includes a background or other objects in addition to the first object, processing the second image to obtain a second image including only the first object.
In the embodiment of the application, the second image can be identified by utilizing the first image. When the first image is captured, the first object and the environment around the first object can be recognized from the first image. Thus, the second image may be identified by the first image or by the environment around the first object being identified from the first image, whether it includes other objects or background outside the first object, etc., and if so, the second image may need to be processed or filtered to obtain the second image including only the first object.
Wherein the neural network model may be utilized to process a second image containing other objects or backgrounds. Specifically, the following: inputting a second image into a second image training model, wherein the second image training model is obtained by training a plurality of groups of training data, and each group of training data in the plurality of groups of training data comprises: an image including a third object and a background or other object and an image including only the third object; and acquiring a second image which only comprises the first object and is output by the second image training model from the second image training model.
In the embodiment of the present application, the second image training model is similar to the first image training model and is also a neural network model, which is not described herein again.
The second image training model may be trained as follows: images containing an object together with a background or other objects are input into the second image training model. Each set of input training data comprises two images: one containing the object plus the background or other objects, the other containing only that object. In this embodiment, the images in the set are taken to contain a third object. The image containing the third object and the background or other objects is trained with the image containing only the third object as supervision data, yielding output data. By comparing the supervision data with the output data and adjusting the model parameters of the second image training model, continued training produces a second image training model meeting the precision requirement.
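Stripped of the learning itself, the trained second model's effect is a per-pixel keep-or-blank decision. The sketch below substitutes a hypothetical predicate for the network and only illustrates the input/output contract (image with background in, object-only image out):

```python
def remove_background(image, is_object_pixel):
    # Keep pixels judged to belong to the object; blank everything
    # else. `is_object_pixel` stands in for the decision the second
    # image training model would have learned and is purely
    # illustrative; the image is a 2-D list of pixel values.
    return [[px if is_object_pixel(px) else 0 for px in row]
            for row in image]
```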
In this embodiment, the three-dimensional image information may include three-dimensional coordinates and pixels of each pixel point of the first object. Thus, with these data, a three-dimensional image can be constructed. Wherein in one embodiment, a three-dimensional image can be constructed with a neural network model, see fig. 3, the method comprising:
step S510, acquiring three-dimensional coordinates and pixels of each pixel point forming the three-dimensional image of the first object from the three-dimensional image information of the first object;
step S520, inputting the three-dimensional coordinates and the pixels of each pixel point into a third image training model, wherein the third image training model is obtained by training a plurality of groups of training data, and each group of training data in the plurality of groups of training data comprises: the three-dimensional coordinates and pixels of each pixel point in the three-dimensional images used for training and the three-dimensional images used for training;
and step S530, acquiring the three-dimensional image output by the third image training model from the third image training model.
In this embodiment of the application, the third image training model may be trained in advance: the three-dimensional coordinates and pixel values of each pixel point in a three-dimensional image used for training are input into the third image training model as input data; each set of input training data comprises the three-dimensional coordinates and pixels of each pixel point in a training three-dimensional image together with that three-dimensional image itself; and the coordinates and pixels are trained with the training three-dimensional image as supervision data to obtain output data. By comparing the supervision data with the output data and adjusting the model parameters of the third image training model, continued training produces a third image training model meeting the precision requirement.
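The third model's input/output contract (per-point coordinates and pixels in, a three-dimensional image out) can be illustrated with a direct voxel rasterisation. The application learns this mapping with a network, so the function below is only a stand-in showing the shapes involved:

```python
def rasterise_points(points, size):
    # Place each pixel point's value at its rounded coordinate in a
    # size x size x size voxel grid; unfilled voxels stay zero. Each
    # entry of `points` is ((x, y, z), pixel_value).
    grid = [[[0] * size for _ in range(size)] for _ in range(size)]
    for (x, y, z), value in points:
        grid[round(x)][round(y)][round(z)] = value
    return grid
```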
Referring to fig. 4, an embodiment of the present application further provides an apparatus for three-dimensional imaging, including:
a first image obtaining module 100, configured to perform preliminary shooting on a first object by using a camera to obtain a first image;
a second image obtaining module 200, configured to adjust the shooting position and the working parameters of the camera at least once based on the first image, and shoot the first object with the camera after each adjustment of the camera, so as to obtain a plurality of second images;
the first model input module 300 is configured to input a plurality of second images into a first image training model, where the first image training model is obtained by training a plurality of sets of training data, and each set of training data in the plurality of sets of training data includes: images of a plurality of angles of the second object and three-dimensional image information of the second object;
a three-dimensional image information obtaining module 400, configured to obtain, from the first image training model, three-dimensional image information of the first object output by the first image training model;
the three-dimensional image obtaining module 500 is configured to construct a three-dimensional image of the first object according to the three-dimensional image information of the first object.
In one embodiment, the second image acquisition module 200 comprises:
the first image information acquisition unit is used for determining the position and working parameters of the camera when the camera obtains the first image and determining the size information of the first object according to the first image;
the matrix determining unit is used for determining a matrix for adjusting the camera according to the position and working parameters of the camera when the camera shoots the first image and the size information of the first object, wherein the rows and columns of the matrix represent the position of the camera, and the value of each element in the matrix represents the working parameters of the camera;
and the second image shooting unit is used for adjusting the camera element by element according to the matrix and, after each adjustment, shooting the first object with the adjusted camera to obtain a second image corresponding to each element in the matrix.
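The adjustment matrix can be illustrated concretely. The claims state that rows encode camera height, columns encode rotation angle, element values encode working parameters, and the row/column ranges derive from the object's size; the specific sizes, counts, and the focal-length parameter below are assumptions for illustration.

```python
import numpy as np

# Illustrative adjustment matrix: row index -> camera height, column index ->
# rotation angle, element value -> a working parameter (assumed: focal length).
object_height, object_width = 0.4, 0.6               # assumed size information (metres)
heights = np.linspace(0.0, object_height, 3)         # row range from object size
angles = np.linspace(0.0, 360.0, 4, endpoint=False)  # column range (degrees)

focal = np.full((len(heights), len(angles)), 35.0)   # working parameter per element

second_images = []
for i, h in enumerate(heights):                      # adjust the camera one element
    for j, a in enumerate(angles):                   # at a time, shooting after each
        shot = {"height": h, "angle": a, "focal_mm": float(focal[i, j])}
        second_images.append(shot)                   # one second image per element

print(len(second_images))                            # 3 rows x 4 columns = 12 shots
```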
In one embodiment, the three-dimensional image acquisition module 500 includes:
the information acquisition unit is used for acquiring three-dimensional coordinates and pixels of each pixel point forming the three-dimensional image of the first object from the three-dimensional image information of the first object;
the third model input unit is used for inputting the three-dimensional coordinates and the pixels of each pixel point into a third image training model, wherein the third image training model is obtained by training a plurality of groups of training data, and each group of training data in the plurality of groups of training data comprises: the three-dimensional coordinates and pixels of each pixel point in the three-dimensional image used for training and the three-dimensional image used for training;
and the three-dimensional image output unit is used for acquiring the three-dimensional image output by the third image training model from the third image training model.
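The per-pixel input to the third image training model can be sketched as points carrying three-dimensional coordinates and a pixel value. The voxel-grid packing below is a hypothetical stand-in for the model's reconstruction, not the patent's algorithm.

```python
import numpy as np

# Hypothetical sketch: pack each point's three-dimensional coordinates and
# pixel value into a voxel grid, standing in for the reconstructed 3-D image.
def build_voxel_image(points, grid=8):
    # points: iterable of (x, y, z, pixel) with coordinates normalised to [0, 1)
    vol = np.zeros((grid, grid, grid))
    for x, y, z, pixel in points:
        i, j, k = int(x * grid), int(y * grid), int(z * grid)
        vol[i, j, k] = pixel                 # write the pixel at its 3-D location
    return vol

points = [(0.1, 0.2, 0.3, 0.9), (0.5, 0.5, 0.5, 0.4)]
vol = build_voxel_image(points)
print(vol.shape)
```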
As shown in fig. 5, it is a block diagram of an electronic device of a method of three-dimensional imaging according to an embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the application described and/or claimed herein.
As shown in fig. 5, the electronic apparatus includes: one or more processors 601, a memory 602, and interfaces for connecting the various components, including high-speed interfaces and low-speed interfaces. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions for execution within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output apparatus (such as a display device coupled to the interface). In other embodiments, multiple processors and/or multiple buses may be used, along with multiple memories, if desired. Also, multiple electronic devices may be connected, with each device providing portions of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). One processor 601 is illustrated in fig. 5.
The memory 602 is a non-transitory computer readable storage medium as provided herein. Wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the method of three-dimensional imaging provided herein. The non-transitory computer readable storage medium of the present application stores computer instructions for causing a computer to perform the method of three-dimensional imaging provided herein.
The memory 602, as a non-transitory computer readable storage medium, may be used to store non-transitory software programs, non-transitory computer executable programs, and modules, such as program instructions/modules corresponding to the method of three-dimensional imaging in the embodiment of the present application (for example, the first image obtaining module 100, the second image obtaining module 200, the first model input module 300, the three-dimensional image information obtaining module 400, and the three-dimensional image obtaining module 500 shown in fig. 4). The processor 601 executes various functional applications of the server and data processing, i.e., implementing the method of three-dimensional imaging in the above-described method embodiments, by running non-transitory software programs, instructions, and modules stored in the memory 602.
The memory 602 may include a storage program area and a storage data area, wherein the storage program area may store an operating system and an application program required for at least one function; the storage data area may store data created according to use of an electronic device of a method of three-dimensional imaging, and the like. Further, the memory 602 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 602 optionally includes memory located remotely from the processor 601, and these remote memories may be connected over a network to the electronic device of the method of three-dimensional imaging. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device of the method of three-dimensional imaging may further include: an input device 603 and an output device 604. The processor 601, the memory 602, the input device 603 and the output device 604 may be connected by a bus or other means; connection by a bus is illustrated as an example in fig. 5.
The input device 603 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic apparatus of the three-dimensional imaging method, such as a touch screen, a keypad, a mouse, a track pad, a touch pad, a pointing stick, one or more mouse buttons, a track ball, a joystick, or the like. The output devices 604 may include a display device, auxiliary lighting devices (e.g., LEDs), and tactile feedback devices (e.g., vibrating motors), among others. The display device may include, but is not limited to, a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) display, and a plasma display. In some implementations, the display device can be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, ASICs (application-specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user can be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
It should be understood that the various forms of flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders, and the present application is not limited in this respect as long as the desired results of the technical solutions disclosed in the present application can be achieved.
The above-described embodiments are not intended to limit the scope of the present disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (8)

1. A method of three-dimensional imaging, comprising:
the method comprises the steps that a camera is used for conducting primary shooting on a first object to obtain a first image;
taking the first image as a reference, adjusting the shooting position and working parameters of a camera at least once, and shooting the first object by using the camera after the camera is adjusted every time to obtain a plurality of second images;
inputting the plurality of second images into a first image training model, wherein the first image training model is obtained by training a plurality of sets of training data, and each set of training data in the plurality of sets of training data comprises: images of a second object at a plurality of angles and three-dimensional image information of the second object;
obtaining three-dimensional image information of the first object output by the first image training model from the first image training model;
constructing a three-dimensional image of the first object according to the three-dimensional image information of the first object;
wherein, the step of taking the first image as a reference, adjusting the shooting position and the working parameters of the camera at least once, and shooting the first object by using the camera after adjusting the camera each time to obtain a plurality of second images comprises:
according to the first image, determining the position and working parameters of the camera when the camera obtains the first image, and determining the size information of the first object;
determining a matrix for adjusting the camera according to the position and working parameters of the camera when the camera obtains the first image through shooting and the size information of the first object, wherein the numerical values of the rows and columns of the matrix represent the position of the camera, the rows of the matrix represent the height of the camera, the columns of the matrix represent the rotation angle of the camera, the ranges of the rows and columns of the matrix are determined according to the size information of the first object, and the numerical value of each element in the matrix represents the working parameters of the camera;
the camera is adjusted one by one according to each element in the matrix, and after the camera is adjusted each time, the adjusted camera is used for shooting the first object to obtain a second image corresponding to each element in the matrix;
when the first image is shot, the shooting position and the working parameters for shooting the first image are adjusted, so that objects or background objects around the first object are shot in the first image, and the objects or background objects around the first object can be excluded by adjusting the position or the working parameters of the camera when the second image is shot.
2. The method of claim 1, wherein after obtaining the second image, the method further comprises:
identifying whether a background or other objects are included in the second image in addition to the first object;
if the second image includes a background or other object in addition to the first object, the second image is processed to obtain a second image including only the first object.
3. The method of claim 2, wherein said processing said second image if said second image includes a background or other object in addition to said first object comprises:
inputting the second image into a second image training model, wherein the second image training model is obtained by training a plurality of sets of training data, and each set of training data in the plurality of sets of training data comprises: an image including a third object and a background or other object and an image including only the third object;
and acquiring a second image which only comprises the first object and is output by the second image training model from the second image training model.
4. The method of claim 1, wherein constructing the three-dimensional image of the first object from the three-dimensional image information of the first object comprises:
acquiring three-dimensional coordinates and pixels of each pixel point forming the three-dimensional image of the first object from the three-dimensional image information of the first object;
inputting the three-dimensional coordinates and the pixels of each pixel point into a third image training model, wherein the third image training model is obtained by training a plurality of groups of training data, and each group of training data in the plurality of groups of training data comprises: the three-dimensional coordinates and pixels of each pixel point in the three-dimensional image used for training, and the three-dimensional image used for training;
and acquiring the three-dimensional image output by the third image training model from the third image training model.
5. An apparatus for three-dimensional imaging, comprising:
the first image acquisition module is used for carrying out preliminary shooting on a first object by utilizing a camera to obtain a first image;
the second image acquisition module is used for adjusting the shooting position and working parameters of a camera at least once by taking the first image as a reference, and shooting the first object by using the camera after the camera is adjusted every time to obtain a plurality of second images;
a first model input module, configured to input the plurality of second images into a first image training model, where the first image training model is obtained by training a plurality of sets of training data, and each set of training data in the plurality of sets of training data includes: images of a second object at a plurality of angles and three-dimensional image information of the second object;
the three-dimensional image information acquisition module is used for acquiring the three-dimensional image information of the first object output by the first image training model from the first image training model;
the three-dimensional image acquisition module is used for constructing a three-dimensional image of the first object according to the three-dimensional image information of the first object;
wherein the second image acquisition module comprises:
the first image information acquisition unit is used for determining the position and working parameters of the camera when the camera obtains the first image through shooting and determining the size information of the first object according to the first image;
a matrix determining unit, configured to determine a matrix for adjusting the camera according to the position and working parameters of the camera when the camera takes the first image and size information of the first object, where the rows and columns of the matrix indicate the position of the camera, the rows of the matrix indicate the height of the camera, the columns of the matrix indicate the rotation angle of the camera, the ranges of the rows and columns of the matrix are determined according to the size information of the first object, and the value of each element in the matrix indicates the working parameters of the camera;
the second image shooting unit is used for adjusting the camera one by one according to each element in the matrix, and shooting the first object by using the adjusted camera after adjusting the camera each time to obtain a second image corresponding to each element in the matrix;
when the first image is shot, the shooting position and the working parameters for shooting the first image are adjusted, so that objects or background objects around the first object are also shot in the first image, and when the second image is shot, the objects or the background around the first object are excluded by adjusting the position or the working parameters of the camera.
6. The apparatus of claim 5, wherein the three-dimensional image acquisition module comprises:
an information acquisition unit, configured to acquire three-dimensional coordinates and pixels of each pixel point that forms a three-dimensional image of the first object from three-dimensional image information of the first object;
a third model input unit, configured to input the three-dimensional coordinates and pixels of each pixel point into a third image training model, where the third image training model is obtained by training multiple sets of training data, and each set of training data in the multiple sets of training data includes: the three-dimensional coordinates and pixels of each pixel point in the three-dimensional image used for training, and the three-dimensional image used for training;
and the three-dimensional image output unit is used for acquiring the three-dimensional image output by the third image training model from the third image training model.
7. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
The memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-4.
8. A non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1-4.
CN202010818526.6A 2020-08-14 2020-08-14 Three-dimensional imaging method and device, electronic equipment and storage medium Active CN111918049B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010818526.6A CN111918049B (en) 2020-08-14 2020-08-14 Three-dimensional imaging method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010818526.6A CN111918049B (en) 2020-08-14 2020-08-14 Three-dimensional imaging method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111918049A CN111918049A (en) 2020-11-10
CN111918049B true CN111918049B (en) 2022-09-06

Family

ID=73283135

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010818526.6A Active CN111918049B (en) 2020-08-14 2020-08-14 Three-dimensional imaging method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111918049B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117671402A (en) * 2022-08-22 2024-03-08 华为技术有限公司 Recognition model training method and device and mobile intelligent equipment

Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005227929A (en) * 2004-02-12 2005-08-25 Nippon Telegr & Teleph Corp <Ntt> Processing method for photography image of object, image display system, program and recording medium
CN104539931A (en) * 2014-12-05 2015-04-22 北京格灵深瞳信息技术有限公司 Multi-ocular camera system, device and synchronization method
CN107103641A (en) * 2017-03-23 2017-08-29 微景天下(北京)科技有限公司 Three dimensional reconstructive CT system and three dimensional reconstructive CT method
CN107360364A (en) * 2017-06-28 2017-11-17 维沃移动通信有限公司 A kind of image capturing method and master mobile terminal
CN107465907A (en) * 2017-08-16 2017-12-12 维沃移动通信有限公司 A kind of image pickup method and mobile terminal
CN107516333A (en) * 2016-06-17 2017-12-26 长春理工大学 Adaptive De Bruijn color structured light coding methods
CN108495045A (en) * 2018-05-17 2018-09-04 Oppo广东移动通信有限公司 Image capturing method, device, electronic device and storage medium
CN108615256A (en) * 2018-03-29 2018-10-02 西南民族大学 A kind of face three-dimensional rebuilding method and device
CN109190469A (en) * 2018-07-27 2019-01-11 阿里巴巴集团控股有限公司 A kind of detection method and device, a kind of calculating equipment and storage medium
CN109255843A (en) * 2018-09-26 2019-01-22 联想(北京)有限公司 Three-dimensional rebuilding method, device and augmented reality AR equipment
CN110060331A (en) * 2019-03-14 2019-07-26 杭州电子科技大学 Three-dimensional rebuilding method outside a kind of monocular camera room based on full convolutional neural networks
CN110113538A (en) * 2019-06-12 2019-08-09 北京百度网讯科技有限公司 Intelligent capture apparatus, intelligent control method and device
CN110163953A (en) * 2019-03-11 2019-08-23 腾讯科技(深圳)有限公司 Three-dimensional facial reconstruction method, device, storage medium and electronic device
WO2020042800A1 (en) * 2018-08-31 2020-03-05 阿里巴巴集团控股有限公司 Auxiliary method for capturing damage assessment image of vehicle, device, and apparatus
WO2020098686A1 (en) * 2018-11-16 2020-05-22 广州市百果园信息技术有限公司 Face detection model training method and apparatus, and face key point detection method and apparatus
CN111279352A (en) * 2017-10-27 2020-06-12 汉阳大学校产学协力团 Three-dimensional information acquisition system through ball throwing exercise and camera parameter calculation method
CN111432195A (en) * 2020-03-30 2020-07-17 维沃移动通信(杭州)有限公司 Image shooting method and electronic equipment

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110298897A1 (en) * 2010-06-08 2011-12-08 Iva Sareen System and method for 3d virtual try-on of apparel on an avatar
US9396577B2 (en) * 2012-02-16 2016-07-19 Google Inc. Using embedded camera parameters to determine a position for a three-dimensional model
JP6572099B2 (en) * 2015-11-06 2019-09-04 キヤノン株式会社 Imaging apparatus, control method therefor, and program
JP6733267B2 (en) * 2016-03-31 2020-07-29 富士通株式会社 Information processing program, information processing method, and information processing apparatus


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Experimental study on dowel-slot bearing capacity of domestic larch dimension lumber; Liu Kezhen et al.; Journal of Anhui Agricultural Sciences; 2011-06-15 (No. 17); full text *
Cross-age face recognition algorithm based on HOG/PCA/SVM; Peng Sijiang et al.; Journal of Jishou University (Natural Science Edition); 2018-09-25 (No. 05); full text *


Similar Documents

Publication Publication Date Title
CN112541963B (en) Three-dimensional avatar generation method, three-dimensional avatar generation device, electronic equipment and storage medium
JP6504590B2 (en) System and computer implemented method for semantic segmentation of images and non-transitory computer readable medium
CN111753961B (en) Model training method and device, prediction method and device
CN111598818A (en) Face fusion model training method and device and electronic equipment
CN109191512A (en) The depth estimation method and device of binocular image, equipment, program and medium
WO2022179581A1 (en) Image processing method and related device
CN111968203B (en) Animation driving method, device, electronic equipment and storage medium
CN111753908A (en) Image classification method and device and style migration model training method and device
CN112241716B (en) Training sample generation method and device
CN108875931A (en) Neural metwork training and image processing method, device, system
CN112489102A (en) Three-dimensional reconstruction method, device, equipment and storage medium
CN111507924B (en) Video frame processing method and device
CN111784757B (en) Training method of depth estimation model, depth estimation method, device and equipment
CN113205560A (en) Calibration method, device and equipment of multi-depth camera and storage medium
CN111488821B (en) Method and device for identifying countdown information of traffic signal lamp
CN107959798A (en) Video data real-time processing method and device, computing device
CN111710008B (en) Method and device for generating people stream density, electronic equipment and storage medium
CN111918049B (en) Three-dimensional imaging method and device, electronic equipment and storage medium
CN115222896A (en) Three-dimensional reconstruction method and device, electronic equipment and computer-readable storage medium
CN111523467B (en) Face tracking method and device
CN110472600A (en) The identification of eyeground figure and its training method, device, equipment and storage medium
CN111986263A (en) Image processing method, image processing device, electronic equipment and storage medium
US20230164446A1 (en) Imaging exposure control method and apparatus, device and storage medium
CN111524165A (en) Target tracking method and device
CN115841520A (en) Camera internal reference calibration method and device, electronic equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant