CN108550182B - Three-dimensional modeling method and terminal

Info

Publication number
CN108550182B
Authority
CN
China
Prior art keywords
sub
focusing
depth
area
camera
Prior art date
Legal status
Active
Application number
CN201810214667.XA
Other languages
Chinese (zh)
Other versions
CN108550182A (en)
Inventor
侯海军
王富明
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN201810214667.XA
Publication of CN108550182A
Application granted
Publication of CN108550182B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects

Abstract

The invention discloses a three-dimensional modeling method and a terminal. The method includes: performing N focusing operations on the surface of an object corresponding to a preview image acquired by a camera; acquiring depth-of-field data of M sub-areas of the preview image during the N focusing operations; and constructing a three-dimensional model corresponding to the surface of the object according to the depth-of-field data. According to the invention, the depth-of-field data of the preview image can be acquired through the terminal camera alone, which reduces hardware cost; and because the depth-of-field data are acquired by focusing on the object surface corresponding to the preview image, the influence of external light on the acquisition is avoided and the acquisition precision of the depth-of-field data is improved.

Description

Three-dimensional modeling method and terminal
Technical Field
The embodiment of the invention relates to the technical field of communication, in particular to a three-dimensional modeling method and a terminal.
Background
With the advancement of technology, 3D technology has been applied to various electronic devices. 3D face recognition is one such application: it recognizes a 3D image of a face and can therefore identify the face accurately.
3D face recognition requires depth-of-field data of the face, from which the face can be recognized in 3D. Therefore, to improve the accuracy of face recognition, the depth-of-field data of the face must be collected accurately.
At present, 3D image modeling can use a binocular stereo vision fusion technique, a structured light three-dimensional vision technique, a time-of-flight (TOF) method, and the like. The binocular stereo vision fusion technique fuses the images obtained by two cameras and observes the differences between them to obtain a clear sense of depth, establishes correspondences between features, and maps the same physical point in space to its projections in the different images, thereby building a three-dimensional model. However, the binocular stereo vision fusion technique has poor calculation accuracy, a complex algorithm, and cannot work in a dark environment. Although the structured light three-dimensional vision technique and the TOF technique can work normally in a dark environment and have high precision, they are easily influenced by sunlight and place very high requirements on hardware and software.
Therefore, when the prior art is used for three-dimensional modeling, if the hardware and software of the electronic device cannot meet the requirements, the modeling process is easily influenced by external light, and the acquired depth-of-field data have poor precision.
Disclosure of Invention
The embodiment of the invention provides a three-dimensional modeling method and a terminal, which are used to solve the problems that existing three-dimensional modeling is easily influenced by external light and that the acquired depth-of-field data have poor precision.
In order to solve the technical problem, the invention is realized as follows:
in a first aspect, a three-dimensional modeling method is provided, which is applied to a terminal having a camera, and includes:
carrying out focusing operation for N times on the surface of an object corresponding to a preview image acquired by a camera;
in the process of the N times of focusing operation, acquiring depth-of-field data of M sub-areas of the preview image;
and constructing a three-dimensional model corresponding to the surface of the object according to the depth-of-field data.
In a second aspect, a terminal device is provided, which includes:
the execution module is used for executing focusing operation for N times on the surface of an object corresponding to the preview image acquired by the camera;
the acquisition module is used for acquiring the depth of field data of the M sub-areas of the preview image in the process of the N times of focusing operation;
and the construction module is used for constructing a three-dimensional model corresponding to the surface of the object according to the depth-of-field data.
In a third aspect, a terminal device is provided, the terminal device comprising a processor, a memory and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the method according to the first aspect.
In a fourth aspect, a computer-readable storage medium is provided, on which a computer program is stored, which computer program, when being executed by a processor, carries out the steps of the method according to the first aspect.
In the embodiment of the invention, based on the preview image acquired by the terminal camera, N focusing operations can be performed on the surface of the object corresponding to the preview image so as to acquire the depth-of-field data of M sub-areas of the preview image, and a three-dimensional model of the object surface can be established based on the depth-of-field data. Therefore, the depth-of-field data of the preview image can be acquired through the terminal camera alone, which reduces hardware cost; and because the depth-of-field data are acquired by focusing on the object surface corresponding to the preview image, the influence of external light on the acquisition of the depth-of-field data is avoided and the acquisition precision of the depth-of-field data is improved.
Drawings
FIG. 1 is a flow chart of a three-dimensional modeling method of one embodiment of the present invention;
FIG. 2 is one of the flow diagrams of sub-steps of step 110 in FIG. 1;
FIG. 3 is one of the flow diagrams of sub-steps of step 120 in FIG. 1;
FIG. 4 is a second flowchart illustrating the sub-steps of step 110 of FIG. 1;
FIG. 5 is a second flowchart of the substeps of step 120 of FIG. 1;
FIG. 6 is a schematic illustration of depth of field data acquisition according to an embodiment of the present invention;
fig. 7 is a block diagram of a terminal device of one embodiment of the present invention;
fig. 8 is a block diagram of a terminal device according to another embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
At present, three-dimensional modeling can adopt a binocular vision image three-dimensional modeling approach, which is mainly used for depth-of-field extraction, background blurring, 3D imaging, and similar processing. In practice, this approach often has poor accuracy, a complex algorithm, and cannot operate in dark environments.
Three-dimensional modeling can also be performed using the structured light 3D vision principle or TOF technology (whose principle is to emit infrared light and, after it is reflected by the object, calculate the time difference between emission and reflection and convert it into the distance to the object). However, both methods are easily influenced by external light (such as sunlight) and place very high requirements on hardware and software.
Therefore, it is necessary to further improve the accuracy of collecting the depth-of-field data and to avoid the influence of external light on the collection. To this end, the invention provides a three-dimensional modeling method. Fig. 1 is a flowchart of a three-dimensional modeling method according to an embodiment of the present invention, applied to a terminal having a camera. As shown in fig. 1, the method includes:
and step 110, performing focusing operation for N times on the surface of the object corresponding to the preview image acquired by the camera.
The camera can be an electronic component in various terminal devices such as a mobile terminal and a video camera.
When the camera is imaging, an image of the surface of the photographed object, for example an image of a human face, is presented in the lens. The preview image is the image presented in the lens of the camera. The user can browse the preview image on the screen of the terminal device.
And step 120, acquiring depth-of-field data of M sub-areas of the preview image in the process of N times of focusing operation.
When the terminal device is imaging, the preview image is divided in a grid manner into M sub-regions. The value of M may be positively correlated with the pixel count of the camera. For example, a camera in a mobile phone may have 5 million pixels, 10 million pixels, etc.; the more pixels the camera has, the larger the corresponding M value may be.
In this embodiment, the image size of each of the M sub-regions may be the same.
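For illustration only (this sketch is not part of the patent), the following Python snippet shows one way a preview frame could be partitioned into equal-sized sub-regions; the 3x9 grid, the 1080x1920 frame size, and the function name divide_into_subregions are assumptions chosen for the example.

```python
# Illustrative sketch: divide a preview image into an M-cell grid of equal-sized
# sub-regions. The grid dimensions and image size below are assumptions only.
import numpy as np

def divide_into_subregions(preview: np.ndarray, rows: int, cols: int):
    """Return a list of (row, col, sub_image) tuples covering the preview image."""
    h, w = preview.shape[:2]
    cell_h, cell_w = h // rows, w // cols
    subregions = []
    for r in range(rows):
        for c in range(cols):
            sub = preview[r * cell_h:(r + 1) * cell_h, c * cell_w:(c + 1) * cell_w]
            subregions.append((r, c, sub))
    return subregions

preview_image = np.zeros((1080, 1920, 3), dtype=np.uint8)  # placeholder preview frame
cells = divide_into_subregions(preview_image, rows=3, cols=9)  # M = 27 sub-regions
print(len(cells))  # 27
```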
The depth of field (DOF) is the range of distances in front of and behind the subject, measured from the front of the camera lens, within which the image appears sharp. The aperture, the lens, and the distance to the object are important factors affecting the depth of field. Usually, after focusing is completed, a clear image appears within a range in front of and behind the focal point, and this range of distances is called the depth of field.
When the camera shoots and focusing succeeds, a clear image of the corresponding point on the object is obtained. Therefore, by focusing, when the object surface corresponding to a sub-region image is successfully focused by the camera, the depth-of-field data of that sub-region image can be determined.
Depth-of-field data here is generally understood as the depth-of-field data obtained when the camera successfully focuses on a point on the object surface. For this embodiment, since the camera generally has a large pixel count, at least in the millions, each sub-region image obtained by the division can be regarded as an extremely small point; in other words, each such point is actually formed by an extremely small area. The images of the M sub-regions described in this embodiment are very small region images, and each can be regarded as a corresponding point. When these sub-region images are regarded as points, the object surfaces corresponding to them are likewise points, which may be referred to as object surface points. Therefore, in this embodiment the depth-of-field data corresponding to the object surface point of each sub-region image can be determined by focusing.
In this embodiment, in the process of N times of focusing operations, all depth-of-field data of M sub-regions of the preview image may be acquired.
And step 130, constructing a three-dimensional model corresponding to the surface of the object according to the depth-of-field data.
After the depth-of-field data of all object surface points on the object surface are acquired by focusing, a three-dimensional model can be constructed from the depth-of-field data.
The three-dimensional model is the three-dimensional morphology of the object surface constructed from the depth-of-field data. Once constructed, the three-dimensional model can be used to identify the object.
It is to be understood that the construction of the three-dimensional model can be implemented by different algorithms; this embodiment is not limited to any specific algorithm, and the implementation of these algorithms falls within the scope of this embodiment.
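As an illustrative sketch only, and not the patent's algorithm, the snippet below shows how per-sub-region depth-of-field values could be turned into a simple set of 3D points that a surface-reconstruction step might consume; the grid of depth values, the cell size, and the helper name depth_grid_to_points are assumptions.

```python
# Illustrative sketch: turn per-sub-region depth-of-field values into a simple
# 3D point set (x, y, depth). The depth values and grid layout are assumptions.
import numpy as np

def depth_grid_to_points(depth_grid: np.ndarray, cell_size: float = 1.0) -> np.ndarray:
    """Map an (rows, cols) grid of depth values to an array of 3D points."""
    rows, cols = depth_grid.shape
    ys, xs = np.mgrid[0:rows, 0:cols]
    points = np.stack([xs * cell_size, ys * cell_size, depth_grid], axis=-1)
    return points.reshape(-1, 3)

depth_grid = np.array([[10.0, 10.0, 12.0],
                       [10.0, 12.0, 12.0],
                       [10.0, 10.0, 10.0]])  # hypothetical depths per sub-region
points = depth_grid_to_points(depth_grid, cell_size=0.5)
print(points.shape)  # (9, 3): one 3D point per sub-region
```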
In the embodiment of the invention, based on the preview image acquired by the terminal camera, N focusing operations can be performed on the surface of the object corresponding to the preview image so as to acquire the depth-of-field data of M sub-areas of the preview image, and a three-dimensional model of the object surface can be established based on the depth-of-field data. Therefore, the depth-of-field data of the preview image can be acquired through the terminal camera alone, which reduces hardware cost; and because the depth-of-field data are acquired by focusing on the object surface corresponding to the preview image, the influence of external light on the acquisition of the depth-of-field data is avoided and the acquisition precision of the depth-of-field data is improved.
In an implementation manner of this embodiment, when the preview image in the terminal camera is divided into M sub-regions, the value of M may be set to a relatively large value. A larger M correspondingly increases the amount of depth-of-field data acquired, thereby improving the precision of the constructed three-dimensional model.
Fig. 2 is one of the flow charts of the sub-steps of step 110 in fig. 1. As shown in fig. 2, step 110 includes:
step 111, controlling a motor in the camera to move from a preset first position to a preset second position;
and step 112, performing focusing operation on the surface of the object for N times in the moving process.
The first position and the second position may be preset positions. Since the displacement of the motor has a certain limit, the first position may be the initial position of the motor. The initial position may be understood as the default position of the motor when the terminal device is powered on, i.e. the position where the displacement is zero and the motor has not moved. The second position may be the position at which the motor reaches its maximum displacement. Assuming the maximum distance the motor can move is 1 mm, the movement of the motor from the first position to the second position is 1 mm; that is, the first position may be the zero position and the second position the position reached after the motor moves 1 mm.
When the motor moves from the first position to the second position, the camera can perform focusing operation on the surface of the object for N times.
Here, it should be emphasized that in the present embodiment, when the motor moves from the first position to the second position, it is necessary to ensure that all the M sub-region images of the preview image are successfully focused with the camera. Therefore, when the distance between the first position and the second position is the maximum displacement amount that the motor can move, it can be ensured that all the M sub-regions can be successfully focused with the camera.
In this embodiment, successful focusing means that any one of the M sub-areas is a focusing area of the camera, and correspondingly, the object surface corresponding to the image of the sub-area is a focusing position of the camera.
As an implementation manner of this embodiment, a corresponding number of focusing positions may be set at equal intervals between the first position and the second position according to the value of N; the number of such intermediate focusing positions may be N-2. The motor is then controlled to move from the first position through the focusing positions in sequence until it reaches the second position, so that N focusing operations are completed while the motor in the camera moves from the preset first position to the preset second position, i.e. one focusing operation at each position.
When controlling the movement of the motor, the motor may be moved from the first position to the next focusing position, and so on, until it reaches the second position. It will be appreciated that this manner of movement can be achieved by controlling the current applied to the motor. The number of focusing positions may be positively correlated with the magnitude of M: the larger M is, the more focusing operations the camera needs to perform on the object surfaces corresponding to the M sub-region images, which requires more focusing positions. However, the larger the value of M, the longer the whole focusing process takes, thereby prolonging the time of three-dimensional modeling. Therefore, the value of M and the number of focusing positions need to be set reasonably.
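The focus-sweep scheduling described above can be pictured with the following minimal Python sketch, assuming a 1 mm maximum motor travel and N = 10; the function name and all numeric values are illustrative assumptions, not values prescribed by the patent.

```python
# Illustrative sketch of the sweep: N focusing positions in total, consisting of
# the first position, N-2 equally spaced intermediate positions, and the second
# position, with one focusing operation performed at each position.

def focus_positions(first_pos_mm: float, second_pos_mm: float, n: int) -> list:
    """Return N equally spaced motor positions from the first to the second position."""
    if n < 2:
        raise ValueError("need at least the first and the second position")
    step = (second_pos_mm - first_pos_mm) / (n - 1)
    return [first_pos_mm + i * step for i in range(n)]

# Assumed values: 1 mm maximum travel and N = 10 focusing operations.
for s in focus_positions(0.0, 1.0, n=10):
    print(f"focusing operation at motor displacement {s:.3f} mm")
```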
In this embodiment, when the motor moves from the first position to the second position, the camera can perform N focusing operations, and it is ensured that these N focusing operations are sufficient for the object surface corresponding to every one of the M sub-regions to become a focusing region, so that the displacement of the motor can be obtained and, finally, the depth-of-field data of the M sub-regions. A three-dimensional model of the object surface can then be established based on the depth-of-field data, so that the depth-of-field data of the preview image can be acquired through the terminal camera alone, reducing hardware cost; and because the depth-of-field data are acquired by focusing on the object surface corresponding to the preview image, the influence of external light on the acquisition is avoided and the acquisition precision of the depth-of-field data is improved.
Fig. 3 is one of the flow charts of the sub-steps of step 120 in fig. 1. As shown in fig. 3, step 120 includes:
step 121, in the moving process, when the object surface corresponding to at least one of the M sub-areas of the preview image is a focusing area, acquiring a first displacement of a motor in the camera;
and step 122, calculating the depth of field data of at least one sub-area in the M sub-areas of the preview image according to the first displacement.
In this embodiment, when performing a focusing operation, when an object surface corresponding to any one of the M sub-regions of the preview image is a focusing region, a first displacement of a motor in the camera may be acquired, and corresponding depth-of-field data may be determined according to the first displacement.
It should be noted that, by setting the focusing position, the corresponding displacement amount of the motor and at least one of the M sub-areas successfully focused by the camera can be determined when the motor is at the first position, the focusing position, and the second position. Here, the setting of the focusing position needs to ensure that after the motor moves to the second position, all the M sub-area images are successfully focused with the camera.
The focusing process of the camera is essentially the process of driving related components to complete focusing when a motor in the camera moves. Therefore, the displacement of the motor in the camera has a corresponding functional relationship with the depth of field data of the object surface when focusing is successful. This functional relationship may have different expressions in different cameras, but this does not affect the implementation of the present embodiment. It should be noted that, in the present embodiment, when focusing is successful, the depth of field data corresponding to the object surface can be determined by the displacement of the motor in the camera.
It is understood that when the depth data is determined by the displacement of the motor, the calculation of the depth data is not affected by the interference of external light.
It is known that the motor moves because the terminal device applies a corresponding current to it. Therefore, the calculation of the depth-of-field data could also be completed by collecting corresponding current information or other electrical quantities. However, it should be noted that in this embodiment the corresponding depth-of-field data can be calculated more directly from the displacement of the motor, which improves the accuracy of the calculated depth-of-field data.
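The functional relationship L = f(S) between motor displacement and depth of field is camera-specific and not specified by the patent. Purely as a labeled assumption, the sketch below uses a thin-lens-style mapping to illustrate the idea of converting a displacement reading into a depth value; the focal length, the coupling coefficient, and the function name are hypothetical.

```python
# Illustrative, hypothetical L = f(S): NOT the patent's formula. The thin-lens
# style mapping and all parameters below are assumptions for illustration only.

def depth_from_displacement(displacement_mm: float,
                            focal_length_mm: float = 4.0,
                            travel_to_image_dist: float = 0.2) -> float:
    """Map motor travel S to an object distance L under an assumed linear coupling."""
    # Assumed: the lens-to-sensor distance grows linearly with motor displacement.
    image_distance = focal_length_mm + travel_to_image_dist * displacement_mm
    # Thin-lens equation: 1/f = 1/L + 1/image_distance  =>  L = 1 / (1/f - 1/image_distance)
    return 1.0 / (1.0 / focal_length_mm - 1.0 / image_distance)

for s_mm in (0.1, 0.5, 1.0):
    print(f"S = {s_mm:.1f} mm  ->  L = {depth_from_displacement(s_mm):.0f} mm")
```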
In an implementation manner of this embodiment, a displacement detection device may be disposed in the camera to detect the displacement of the motor, so that when focusing succeeds the displacement of the motor can be read directly from the displacement detection device.
The displacement detection device can be connected to a processor in the terminal device, so that when the processor determines that focusing has succeeded, it can directly read the corresponding displacement of the motor from the displacement detection device.
In this embodiment, the terminal device may be any of various electronic devices such as a mobile phone or a tablet computer. It is to be understood that the components of the terminal device are configured so as to fully implement what is described in this embodiment.
In another implementation manner of this embodiment, the camera may also be controlled to perform focusing operation N times according to the M sub-region images of the preview image, so as to obtain depth-of-field data of the M sub-region images. Fig. 4 is a second flowchart of the substeps of step 110 in fig. 1. As shown in fig. 4, step 110 includes:
step 113, determining at least one sub-area in the M sub-areas as a target sub-area according to a preset sequence;
and step 114, performing focusing operation on the object surface corresponding to the target sub-area.
In this embodiment, at least one sub-region may be determined from the M sub-regions as a target sub-region, and then the camera is controlled to perform focusing operation on the object surface corresponding to the target sub-region.
Wherein the values of N and M may be the same. Specifically, in this embodiment, a target sub-area may be sequentially determined, and a focusing operation may be performed on the object surface corresponding to the target sub-area.
In the whole focusing process of the embodiment, the object surface corresponding to each of the M sub-regions can complete the focusing operation with the camera.
Fig. 5 is a second flowchart of the substeps of step 120 of fig. 1. As shown in fig. 5, step 120 includes:
step 123, in the process of performing focusing operation on the object surface corresponding to the target sub-area, acquiring a second displacement of the motor in the camera when the object surface corresponding to the target sub-area is a focusing area;
step 124, calculating the depth of field data of the target subregion according to the second displacement amount.
In this embodiment, in the focusing process, when the object surface corresponding to the target sub-region is a focusing region, the second displacement of the motor at this time may be determined, and the depth data of the corresponding target sub-region may be calculated.
In this embodiment, when the three-dimensional model is constructed from the depth-of-field data, the number of plane heights of the object may be determined according to the values of the depth-of-field data: sub-regions whose depth-of-field data are equal lie at the same plane height, while sub-regions whose depth-of-field data differ do not. After the number of plane heights is determined, when the three-dimensional model is further constructed from the depth-of-field data, the corresponding depth-of-field data only need to be determined per plane height, which can improve the efficiency of constructing the three-dimensional model.
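As an illustrative aid only, the following sketch groups sub-regions whose depth-of-field values agree within a tolerance into the same plane height; the tolerance, the depth values, and the helper name group_plane_heights are assumptions not taken from the patent.

```python
# Illustrative sketch of the plane-height grouping described above: sub-regions
# with (nearly) equal depth values are treated as one plane height. Assumed data.

def group_plane_heights(depths: dict, tolerance: float = 0.5) -> dict:
    """Group sub-region labels whose depth values agree within the tolerance."""
    planes: dict = {}
    for label, depth in depths.items():
        for height in planes:
            if abs(depth - height) <= tolerance:
                planes[height].append(label)
                break
        else:
            planes[depth] = [label]  # start a new plane height
    return planes

depths = {"A1": 120.0, "A2": 120.2, "D1": 80.1, "D2": 79.9, "J3": 120.1}
for height, labels in group_plane_heights(depths).items():
    print(f"plane height ~{height} mm: {labels}")
```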
Fig. 6 is a schematic view of depth-of-field data acquisition according to an embodiment of the present invention. As shown in fig. 6, the model of the photographed object is simplified, for convenience of description, to have only two plane heights. When calculating the depth-of-field data of the object, the top-view image of the object presented in the camera may first be divided into M corresponding sub-area images; as shown in fig. 6, it may be divided into 27 (M = 27) sub-area images. It is to be understood that the numerals in this embodiment are merely specific examples given to make the embodiment easier to understand; the specific data described are only for understanding the embodiment and are not specific limitations of it. The 27 sub-area images may be labeled A1, A2, A3, ..., J1, J2, J3.
Thereafter, as one implementation, a first position and a second position of the motor may be set; the first position may be S0 and the second position Sn. While the motor moves from the first position to the second position, the camera is controlled to focus on the M sub-area images, and when focusing succeeds the displacement of the motor is determined. Here, equally spaced focusing positions, such as S1, S2, ..., may be provided between the first position and the second position. The focusing positions are set so that, when the motor is at the first position, a focusing position, or the second position, the corresponding displacement of the motor can be determined for at least one of the M sub-area images that the camera has focused successfully. The focusing positions ensure that all M sub-area images have been successfully focused by the camera after the motor moves to the second position.
For example, in the auto-focusing process, when the motor is advanced to the position Sk, the displacement amount of the motor is Sk. Assuming that the cameras successfully focus on the object surfaces corresponding to the images D1 to D3, E1 to E3, and F1 to F3, the depth-of-field data of the object surfaces corresponding to the sub-area images are: l = f (Sk). L denotes the depth of field, and f denotes a functional relationship between the depth of field and the amount of displacement.
When the motor advances to the position Sm during auto-focusing, the displacement amount of the motor at this time is Sm. Assuming that the cameras successfully focus on the object surfaces corresponding to A1 to A3, B1 to B3, C1 to C3, G1 to G3, H1 to H3, and J1 to J3, the depth of field of the object surface corresponding to the sub-area images is: l = f (Sm).
In another implementation of this embodiment, for the 27 (M = 27) sub-region images A1, A2, A3, ..., J1, J2, J3, the sub-area images D1 to D3, E1 to E3, and F1 to F3 can be selected first, and the camera can then be controlled to focus on these sub-area images; assuming the displacement of the motor is Sk when focusing succeeds, the depth-of-field data of the object surfaces corresponding to these sub-area images is: L = f(Sk).
Then, the sub-area images A1 to A3, B1 to B3, C1 to C3, G1 to G3, H1 to H3, and J1 to J3 may be selected, and then the camera may be controlled to focus on these sub-area images, and assuming that the displacement of the motor is Sm when the focusing is successful, the depth of field of the object surface corresponding to these sub-area images is: l = f (Sm).
Of course, any one of the sub-regions A1, A2, A3, ..., J1, J2, J3 may also be selected in sequence as the target sub-region, and the camera then controlled to perform a focusing operation with it; when the surface of the object corresponding to the target sub-region is the focusing region of the camera, the displacement of the motor can be determined, and from this displacement the depth-of-field data of the target sub-region can be calculated.
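The Fig. 6 example can be summarized in the short sketch below, which records, for each sub-region, the depth L = f(S) at the displacement where it first comes into focus. The region groupings follow the example above, but the linear f() and the displacements 0.4 mm and 0.8 mm standing in for Sk and Sm are assumptions for illustration only.

```python
# Illustrative sketch of the Fig. 6 sweep: as the motor advances, each sub-region
# is assigned a depth at the displacement where it first comes into focus.

def f(displacement_mm: float) -> float:
    """Hypothetical stand-in for the camera-specific depth/displacement relationship."""
    return 200.0 - 100.0 * displacement_mm  # assumed linear mapping, result in mm

# Assumed displacements: Sk = 0.4 mm and Sm = 0.8 mm; groupings follow Fig. 6.
focus_events = {
    0.4: ["D1", "D2", "D3", "E1", "E2", "E3", "F1", "F2", "F3"],            # Sk
    0.8: ["A1", "A2", "A3", "B1", "B2", "B3", "C1", "C2", "C3",
          "G1", "G2", "G3", "H1", "H2", "H3", "J1", "J2", "J3"],            # Sm
}

depth_map = {}
for displacement, regions in focus_events.items():
    for region in regions:
        depth_map.setdefault(region, f(displacement))  # depth at first successful focus

print(depth_map["E2"], depth_map["H1"])  # 160.0 120.0
```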
In this embodiment, the displacement of the motor can be obtained by focusing, and the corresponding depth-of-field data are calculated from that displacement, so that the calculation of the depth-of-field data is not influenced by external light. In darker environments, such as at night, to improve the collection precision of the depth-of-field data, the terminal device can focus with the assistance of an infrared fill light and an infrared camera and thereby complete the collection of the depth-of-field data.
The three-dimensional modeling method according to the embodiment of the present invention is described in detail above with reference to fig. 1 to 6. The terminal device according to the embodiment of the present invention is described in detail below. Fig. 7 is a block diagram of a terminal device according to an embodiment of the present invention, and as shown in fig. 7, the terminal device 700 includes:
the execution module 710 is configured to execute focusing operations for N times on the surface of an object corresponding to the preview image acquired by the camera;
an obtaining module 720, configured to obtain depth-of-field data of M sub-regions of the preview image in the process of N focusing operations;
the building module 730 is configured to build a three-dimensional model corresponding to the surface of the object according to the depth-of-field data.
In the embodiment of the invention, based on the preview image acquired by the terminal camera, N focusing operations can be performed on the surface of the object corresponding to the preview image so as to acquire the depth-of-field data of M sub-areas of the preview image, and a three-dimensional model of the object surface can be established based on the depth-of-field data. Therefore, the depth-of-field data of the preview image can be acquired through the terminal camera alone, which reduces hardware cost; and because the depth-of-field data are acquired by focusing on the object surface corresponding to the preview image, the influence of external light on the acquisition of the depth-of-field data is avoided and the acquisition precision of the depth-of-field data is improved.
Optionally, as an embodiment, the executing module 710 includes:
the control unit is used for controlling a motor in the camera to move from a preset first position to a preset second position;
and the focusing unit is used for performing N times of focusing operation on the surface of the object in the moving process.
Optionally, as an embodiment, the obtaining module 720 includes:
the first acquisition unit is used for acquiring a first displacement of a motor in the camera when the surface of an object corresponding to at least one of the M sub-areas of the preview image is a focusing area in the moving process;
and the first calculation unit is used for calculating the depth data of at least one sub-area in the M sub-areas of the preview image according to the first displacement.
Optionally, as an embodiment, the executing module 710 includes:
the determining unit is used for determining at least one sub-area in the M sub-areas as a target sub-area according to a preset sequence;
and the operation unit is used for performing focusing operation on the object surface corresponding to the target sub-area.
Optionally, as an embodiment, the obtaining module 720 includes:
the second acquisition unit is used for acquiring a second displacement of the motor in the camera when the object surface corresponding to the target sub-area is a focusing area in the process of performing focusing operation on the object surface corresponding to the target sub-area;
and the second calculating unit is used for calculating the depth of field data of the target sub-area according to the second displacement.
The terminal device provided in the embodiment of the present invention can implement each process implemented by the terminal device in the method embodiments of fig. 1 to fig. 6, and is not described here again to avoid repetition.
Fig. 8 is a schematic diagram of a hardware structure of a terminal device for implementing various embodiments of the present invention, where the terminal device 800 includes, but is not limited to: a radio frequency unit 801, a network module 802, an audio output unit 803, an input unit 804, a sensor 805, a display unit 806, a user input unit 807, an interface unit 808, a memory 809, a processor 810, and a power supply 811. Those skilled in the art will appreciate that the terminal device configuration shown in fig. 8 does not constitute a limitation of the terminal device, and that the terminal device may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the terminal device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
Wherein, the processor 810 is configured to:
carrying out focusing operation for N times on the surface of an object corresponding to a preview image acquired by a camera;
acquiring depth-of-field data of M sub-regions of the preview image in the N focusing operation processes;
and constructing a three-dimensional model corresponding to the surface of the object according to the depth-of-field data.
In the embodiment of the invention, based on the preview image acquired by the terminal camera, N focusing operations can be performed on the surface of the object corresponding to the preview image so as to acquire the depth-of-field data of M sub-areas of the preview image, and a three-dimensional model of the object surface can be established based on the depth-of-field data. Therefore, the depth-of-field data of the preview image can be acquired through the terminal camera alone, which reduces hardware cost; and because the depth-of-field data are acquired by focusing on the object surface corresponding to the preview image, the influence of external light on the acquisition of the depth-of-field data is avoided and the acquisition precision of the depth-of-field data is improved.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 801 may be used for receiving and sending signals during a message sending and receiving process or a call process, and specifically, receives downlink data from a base station and then processes the received downlink data to the processor 810; in addition, the uplink data is transmitted to the base station. In general, radio frequency unit 801 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. Further, the radio frequency unit 801 may also communicate with a network and other devices through a wireless communication system.
The terminal device provides wireless broadband internet access to the user through the network module 802, such as helping the user send and receive e-mails, browse webpages, access streaming media, and the like.
The audio output unit 803 may convert audio data received by the radio frequency unit 801 or the network module 802 or stored in the memory 809 into an audio signal and output as sound. Also, the audio output unit 803 may also provide audio output related to a specific function performed by the terminal apparatus 800 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 803 includes a speaker, a buzzer, a receiver, and the like.
The input unit 804 is used for receiving an audio or video signal. The input Unit 804 may include a Graphics Processing Unit (GPU) 8041 and a microphone 8042, and the Graphics processor 8041 processes image data of still pictures or video obtained by an image capturing device (such as a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 806. The image frames processed by the graphics processor 8041 may be stored in the memory 809 (or other storage medium) or transmitted via the radio unit 801 or the network module 802. The microphone 8042 can receive sound, and can process such sound into audio data. The processed audio data may be converted into a format output transmittable to a mobile communication base station via the radio frequency unit 801 in case of a phone call mode.
The terminal device 800 also includes at least one sensor 805, such as light sensors, motion sensors, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 8061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 8061 and/or the backlight when the terminal device 800 moves to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the terminal device posture (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration identification related functions (such as pedometer, tapping), and the like; the sensors 805 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 806 is used to display information input by the user or information provided to the user. The display unit 806 may include a display panel 8061, and the display panel 8061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED) display, or the like.
The user input unit 807 is operable to receive input numeric or character information and generate key signal inputs related to user settings and function control of the terminal device. Specifically, the user input unit 807 includes a touch panel 8071 and other input devices 8072. The touch panel 8071, also referred to as a touch screen, can collect touch operations by a user on or near the touch panel 8071 (e.g., operations by a user on or near the touch panel 8071 using a finger, a stylus, or any other suitable object or accessory). The touch panel 8071 may include two portions of a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 810, receives a command from the processor 810, and executes the command. In addition, the touch panel 8071 can be implemented by various types such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. In addition to the touch panel 8071, the user input unit 807 can include other input devices 8072. Specifically, the other input devices 8072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described herein again.
Further, the touch panel 8071 can be overlaid on the display panel 8061, and when the touch panel 8071 detects a touch operation on or near the touch panel 8071, the touch operation can be transmitted to the processor 810 to determine a type of the touch event, and then the processor 810 can provide a corresponding visual output on the display panel 8061 according to the type of the touch event. Although in fig. 8, the touch panel 8071 and the display panel 8061 are two independent components to implement the input and output functions of the terminal device, in some embodiments, the touch panel 8071 and the display panel 8061 may be integrated to implement the input and output functions of the terminal device, and this is not limited herein.
The interface unit 808 is an interface for connecting an external device to the terminal apparatus 800. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 808 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the terminal apparatus 800 or may be used to transmit data between the terminal apparatus 800 and an external device.
The memory 809 may be used to store software programs as well as various data. The memory 809 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, etc. Further, the memory 809 can include high speed random access memory, and can also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 810 is a control center of the terminal device, connects various parts of the whole terminal device by using various interfaces and lines, and performs various functions of the terminal device and processes data by running or executing software programs and/or modules stored in the memory 809 and calling data stored in the memory 809, thereby performing overall monitoring of the terminal device. Processor 810 may include one or more processing units; preferably, the processor 810 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 810.
Terminal device 800 may also include a power supply 811 (such as a battery) for powering the various components, and preferably, power supply 811 may be logically coupled to processor 810 via a power management system to provide management of charging, discharging, and power consumption via the power management system.
In addition, the terminal device 800 includes some functional modules that are not shown, and are not described in detail here.
Preferably, an embodiment of the present invention further provides a terminal device, which includes a processor 810, a memory 809, and a computer program stored in the memory 809 and capable of running on the processor 810, where the computer program, when executed by the processor 810, implements each process of the above three-dimensional modeling method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not described here again.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the embodiment of the three-dimensional modeling method, and can achieve the same technical effect, and in order to avoid repetition, the computer program is not described herein again. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of another like element in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (10)

1. A three-dimensional modeling method is applied to a terminal with a camera, and is characterized by comprising the following steps:
carrying out focusing operation for N times on the surface of an object corresponding to a preview image acquired by a camera;
acquiring depth-of-field data of M sub-regions of the preview image in the N focusing operation processes;
constructing a three-dimensional model corresponding to the surface of the object according to the depth-of-field data;
the method for executing N times of focusing operation on the surface of the object corresponding to the preview image acquired by the camera comprises the following steps:
controlling a motor in a camera to sequentially move to a focusing position from a preset first position until the motor moves to a preset second position, wherein the first position is the position where the displacement of the motor is zero, the second position is the position where the motor can move at the maximum displacement, and N-2 focusing positions are arranged between the first position and the second position at equal intervals according to the value of N;
during the moving process, each position carries out a focusing operation, and each position comprises the first position, the focusing position and the second position.
2. The method according to claim 1, wherein the acquiring depth of field data of M sub-regions of the preview image during the N focusing operations comprises:
in the moving process, when the surface of an object corresponding to at least one sub-area in the M sub-areas of the preview image is a focusing area, acquiring a first displacement of a motor in the camera;
and calculating the depth data of at least one sub-area in the M sub-areas of the preview image according to the first displacement.
3. The method according to claim 1, wherein the performing N times of focusing operations on the surface of the object corresponding to the preview image acquired by the camera includes:
determining at least one sub-region in the M sub-regions as a target sub-region according to a preset sequence;
and carrying out focusing operation on the surface of the object corresponding to the target sub-area.
4. The method according to claim 3, wherein the acquiring depth of field data of M sub-regions of the preview image during the N times of focusing operations comprises:
in the process of performing focusing operation on the object surface corresponding to the target sub-area, acquiring a second displacement of a motor in the camera when the object surface corresponding to the target sub-area is a focusing area;
and calculating the depth of field data of the target sub-area according to the second displacement.
5. A terminal device, characterized in that the terminal device comprises:
the execution module is used for executing N times of focusing operation on the surface of an object corresponding to the preview image acquired by the camera;
the acquisition module is used for acquiring the depth of field data of the M sub-areas of the preview image in the process of the N times of focusing operation;
the construction module is used for constructing a three-dimensional model corresponding to the surface of the object according to the depth-of-field data;
the execution module comprises:
the control unit is used for controlling a motor in the camera to sequentially move to a focusing position from a preset first position until the motor moves to a preset second position, the first position is a position where the displacement of the motor is zero, the second position is a position where the motor can move at the maximum displacement, and N-2 focusing positions are arranged between the first position and the second position at equal intervals according to the value of N;
and the focusing unit is used for carrying out focusing operation once at each position in the moving process, and each position comprises the first position, the focusing position and the second position.
6. The terminal device of claim 5, wherein the obtaining module comprises:
a first obtaining unit, configured to obtain, in the moving process, a first displacement amount of a motor in the camera when an object surface corresponding to at least one of the M sub-areas of the preview image is a focusing area;
and the first calculation unit is used for calculating the depth data of at least one sub-area in the M sub-areas of the preview image according to the first displacement.
7. The terminal device of claim 5, wherein the execution module comprises:
the determining unit is used for determining at least one sub-area in the M sub-areas as a target sub-area according to a preset sequence;
and the operation unit is used for performing focusing operation on the surface of the object corresponding to the target sub-area.
8. The terminal device of claim 7, wherein the obtaining module comprises:
the second acquisition unit is used for acquiring a second displacement of the motor in the camera when the object surface corresponding to the target sub-area is a focusing area in the process of performing focusing operation on the object surface corresponding to the target sub-area;
and the second calculating unit is used for calculating the depth-of-field data of the target sub-area according to the second displacement.
9. A terminal device, comprising: memory, processor and computer program stored on the memory and executable on the processor, which computer program, when executed by the processor, carries out the steps of the method according to any one of claims 1 to 4.
10. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, which computer program, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 4.
CN201810214667.XA 2018-03-15 2018-03-15 Three-dimensional modeling method and terminal Active CN108550182B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810214667.XA CN108550182B (en) 2018-03-15 2018-03-15 Three-dimensional modeling method and terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810214667.XA CN108550182B (en) 2018-03-15 2018-03-15 Three-dimensional modeling method and terminal

Publications (2)

Publication Number Publication Date
CN108550182A CN108550182A (en) 2018-09-18
CN108550182B true CN108550182B (en) 2022-10-18

Family

ID=63516402

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810214667.XA Active CN108550182B (en) 2018-03-15 2018-03-15 Three-dimensional modeling method and terminal

Country Status (1)

Country Link
CN (1) CN108550182B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109769091B (en) * 2019-02-22 2020-09-18 维沃移动通信有限公司 Image shooting method and mobile terminal
CN112529770B (en) * 2020-12-07 2024-01-26 维沃移动通信有限公司 Image processing method, device, electronic equipment and readable storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004125708A (en) * 2002-10-04 2004-04-22 Olympus Corp Apparatus and method for measuring three dimensional shape
JP2007172393A (en) * 2005-12-22 2007-07-05 Keyence Corp Three-dimensional image display device, operation method of three-dimensional image display device, three-dimensional image display program, computer readable recording medium and storage device

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000200131A (en) * 1999-01-05 2000-07-18 Canon Inc Three-dimensional image generation system and three- dimensional image generation method
DE602005009432D1 (en) * 2004-06-17 2008-10-16 Cadent Ltd Method and apparatus for color forming a three-dimensional structure
US9769455B2 (en) * 2010-12-21 2017-09-19 3Shape A/S 3D focus scanner with two cameras
WO2014121108A1 (en) * 2013-01-31 2014-08-07 Threevolution Llc Methods for converting two-dimensional images into three-dimensional images
CN104660900B (en) * 2013-10-30 2018-03-02 株式会社摩如富 Image processing apparatus and image processing method
CN106254855B (en) * 2016-08-25 2017-12-05 锐马(福建)电气制造有限公司 A kind of three-dimensional modeling method and system based on zoom ranging
CN106651870B (en) * 2016-11-17 2020-03-24 山东大学 Segmentation method of image out-of-focus fuzzy region in multi-view three-dimensional reconstruction
CN106973227A (en) * 2017-03-31 2017-07-21 努比亚技术有限公司 Intelligent photographing method and device based on dual camera

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004125708A (en) * 2002-10-04 2004-04-22 Olympus Corp Apparatus and method for measuring three dimensional shape
JP2007172393A (en) * 2005-12-22 2007-07-05 Keyence Corp Three-dimensional image display device, operation method of three-dimensional image display device, three-dimensional image display program, computer readable recording medium and storage device

Also Published As

Publication number Publication date
CN108550182A (en) 2018-09-18

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant