CN112348965A - Imaging method, imaging device, electronic equipment and readable storage medium


Info

Publication number
CN112348965A
Authority
CN
China
Prior art keywords
target object, key feature, contour, feature points, region
Legal status
Pending
Application number
CN202011167192.7A
Other languages
Chinese (zh)
Inventor
冀文彬
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN202011167192.7A
Publication of CN112348965A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present application discloses an imaging method, an imaging apparatus, an electronic device and a readable storage medium, belonging to the field of communication technology, which can solve the problems of high power consumption and long modeling time in AR or VR devices. The method comprises the following steps: acquiring spatial position information of key feature points of a target object, the key feature points including any one of: intersection points of contour curves of the target object, or intersection points of N material regions in the surface of the target object; constructing a spatial structure model of the target object according to the spatial position information of the key feature points; and filling the spatial structure model according to the material information corresponding to each material region in the surface of the target object to generate a three-dimensional model of the target object, wherein the material regions in the surface of the target object are divided based on the contour curves of the target object. The embodiments of the present application apply to scenarios in which an AR or VR device is used for an immersive experience.

Description

Imaging method, imaging device, electronic equipment and readable storage medium
Technical Field
The embodiment of the application relates to the technical field of communication, in particular to an imaging method, an imaging device, electronic equipment and a readable storage medium.
Background
With the development of Augmented Reality (AR) and Virtual Reality (VR) technologies, the scenarios in which users employ virtual devices (e.g., AR devices or VR devices) for immersive experiences are becoming increasingly rich. A virtual device can, after three-dimensional modeling, display the image data of a target object captured by its camera within a virtual scene.
In the related art, the virtual device performs three-dimensional modeling by acquiring the pixel characteristics of the target object's surface. However, because the imaging process must compute the pixel characteristics of every acquired pixel, this imaging method requires the virtual device to have substantial computing power and graphics rendering capability, which results in high power consumption of the virtual device and long modeling times.
Disclosure of Invention
An object of the embodiments of the present application is to provide an imaging method, an imaging apparatus, an electronic device, and a readable storage medium, which can solve the problems of large power consumption and long modeling time of an AR or VR device.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides an imaging method, including: acquiring spatial position information of key feature points of a target object, the key feature points including any one of: intersection points of contour curves of the target object, or intersection points of N material regions in the surface of the target object; constructing a spatial structure model of the target object according to the spatial position information of the key feature points; and filling the spatial structure model according to the material information corresponding to each material region in the surface of the target object to generate a three-dimensional model of the target object; wherein the material regions in the surface of the target object are divided based on the contour curves of the target object, and the N material regions are of different materials.
In a second aspect, an embodiment of the present application further provides an imaging apparatus, including: an acquisition module, a construction module and a generation module. The acquisition module is used for acquiring the spatial position information of the key feature points of a target object; the key feature points include any one of: intersection points of contour curves of the target object, or intersection points of N material regions in the surface of the target object. The construction module is used for constructing a spatial structure model of the target object according to the spatial position information of the key feature points acquired by the acquisition module. The generation module is used for filling the spatial structure model constructed by the construction module according to the material information corresponding to each material region in the surface of the target object to generate a three-dimensional model of the target object. The material regions in the surface of the target object are divided based on the contour curves of the target object, and the N material regions are of different materials.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, and a program or instructions stored on the memory and executable on the processor, where the program or instructions, when executed by the processor, implement the steps of the imaging method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In the embodiments of the present application, spatial position information of key feature points of a target object is obtained from the intersection points of the target object's contour curves, or from the intersection points of N adjacent material regions of different materials in the target object's surface; a spatial structure model of the target object is constructed from that spatial position information; and the spatial structure model is then filled according to the material information corresponding to each material region in the target object's surface, generating a three-dimensional model of the target object. The electronic device can construct the spatial structure model of the target object from the key feature points alone, without computing the spatial position of every pixel of the target object in the image captured by the camera, which reduces the computational load on the electronic device. Filling the spatial structure model with the material information corresponding to the target object's surface materials then greatly improves modeling efficiency.
Drawings
FIG. 1 is a first schematic diagram of an imaging method provided by an embodiment of the present application;
FIG. 2 is a schematic flow chart of an imaging method provided by an embodiment of the present application;
FIG. 3 is a second schematic diagram of an imaging method provided by an embodiment of the present application;
FIG. 4 is a third schematic diagram of an imaging method provided by an embodiment of the present application;
FIG. 5 is a schematic structural diagram of an imaging apparatus provided by an embodiment of the present application;
FIG. 6 is a first schematic structural diagram of an electronic device provided by an embodiment of the present application;
FIG. 7 is a second schematic structural diagram of an electronic device provided by an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second" and the like in the description and claims of the present application are used to distinguish between similar elements, not necessarily to describe a particular sequential or chronological order. It should be understood that terms so used are interchangeable under appropriate circumstances, so that the embodiments of the application can be practiced in sequences other than those illustrated or described herein. Moreover, the terms "first", "second" and the like do not limit the number of elements; for example, a first element may be one element or more than one. In addition, "and/or" in the description and claims denotes at least one of the connected objects, and the character "/" generally indicates that the preceding and succeeding objects are in an "or" relationship.
The imaging method provided by the embodiments of the present application can be applied to scenarios in which an AR or VR device is used for an immersive experience.
For example, consider a scenario in which a target object is added to a virtual frame through three-dimensional modeling. In the related art, as shown in fig. 1, an electronic device may capture an image of the target object (e.g., the solid shown in (A) of fig. 1) through a camera, and generate a three-dimensional model of the target object from the pixel information of every pixel of the target object in the image (e.g., the dotted outline of the solid shown in (B) of fig. 1; only contour pixels are displayed there, but in practice all pixels on the solid's surface must be acquired). Because this method does not screen the pixels, the electronic device usually has to compute the pixel information of every pixel, including the color of each pixel, the spatial position information of each pixel, and so on. This produces an enormous amount of data computation; for an electronic device with limited computing capability, the modeling time is long, which delays the appearance of the model in the virtual screen. The method therefore cannot be applied to most electronic devices, and with the popularization of AR applications, an imaging method with a small computational load is needed so that electronic devices with limited computing capability can also be supported.
Addressing this problem, the technical solution provided in the embodiments of the present application starts from the observation that the related art does not screen the acquired pixels, which results in a large computational load; a person skilled in the art would therefore consider reducing the electronic device's data computation by screening pixels. However, how to screen pixels, that is, how to extract key points from the mass of acquired pixels in a way that reduces the computational load without significantly degrading imaging quality, is not obvious. The technical solution of the present application draws on observations of building structures and the human skeleton: during motion, the human body changes its structure only at the joints, while the tissue between joints does not change as the body moves. Joints can therefore be regarded as key points: a skeleton model can be constructed from the lines connecting these key points, and filling that skeleton model with muscle tissue yields a complete human body model.
Accordingly, in the technical solution of the present application, contour information of the target object is obtained first; the intersection points of the target object's contour curves, or the intersection points of N adjacent material regions of different materials in the target object's surface, are determined from that contour information; the spatial position information of these key feature points is obtained; a spatial structure model of the target object is constructed from that spatial position information; and the spatial structure model is then filled according to the material information corresponding to each material region in the target object's surface, generating a three-dimensional model of the target object. The electronic device can construct the spatial structure model of the target object from the key feature points alone, without computing the spatial position of every pixel of the target object in the image captured by the camera, which reduces the computational load on the electronic device. Filling the spatial structure model with the material information corresponding to the target object's surface materials then shortens modeling time and greatly improves modeling efficiency.
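To make this flow concrete, the following is a minimal end-to-end sketch of the three stages described above. It is an illustration only, not the patent's implementation: the function names, the dictionary-based model, and the placeholder logic of each stage are all assumptions.

```python
import numpy as np

def acquire_contour_info(target):
    """Stage 1 (placeholder): contour curves of the target object,
    each curve as an (N, 3) array of sampled points."""
    return target["contours"]

def key_feature_points(contours):
    """Stage 1 (cont.): points where at least two contour curves meet,
    i.e. the intersection points used as key feature points."""
    pts, counts = np.unique(np.concatenate(contours), axis=0,
                            return_counts=True)
    return pts[counts >= 2]

def build_structure_model(points):
    """Stage 2 (placeholder): spatial structure model built from the
    key feature points and the connections between them."""
    return {"vertices": points, "edges": []}

def fill_materials(model, materials):
    """Stage 3 (placeholder): fill each region of the model with the
    material information of the corresponding surface region."""
    model["materials"] = materials
    return model

# Hypothetical input: two contour curves sharing one endpoint.
target = {"contours": [np.array([[0., 0., 0.], [1., 0., 0.]]),
                       np.array([[1., 0., 0.], [1., 1., 0.]])]}
points = key_feature_points(acquire_contour_info(target))
model = fill_materials(build_structure_model(points), {"region_1": "black"})
print(points)  # [[1. 0. 0.]] -- the shared point of the two curves
```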
The imaging method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings by specific embodiments and application scenarios thereof.
As shown in fig. 2, an imaging method provided in an embodiment of the present application may include the following steps 201 to 203:
step 201, the imaging device obtains spatial position information of key feature points of the target object.
Wherein the key feature points include any one of: intersection points of the contour curves of the target object, or intersection points of N material regions in the surface of the target object, where the N material regions are of different materials and N ≥ 2. It is understood that the N material regions are adjacent material regions.
Step 202, the imaging device constructs a spatial structure model of the target object according to the spatial position information of the key feature points.
Illustratively, the target object may be a human, an animal, or any other object having a three-dimensional spatial structure, for example the blocks shown in fig. 1. A key feature point of the target object may be an intersection point of the target object's contour curves, or an intersection point of N adjacent material regions of different materials on its surface. The spatial position information of a key feature point may be understood as its position in three-dimensional space, expressed as (x, y, z) in a spatial coordinate system formed by an x-axis, a y-axis and a z-axis.
For example, a key feature point may be the intersection point of at least two straight lines in the contour of the target object, so two key feature points can be connected by a straight line to construct the spatial structure model of the target object. In practice, however, an object's surface contains no absolutely straight lines, so a straight line may be represented by a curve whose curvature is smaller than a preset curvature. The intersection point of the target object's contour curves can therefore be understood as a point at which a contour curve of the target object turns.
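As an illustration of how such turning points might be located on a sampled contour, the following is a minimal sketch; the discrete angle test and the 30-degree threshold are assumptions standing in for the patent's "preset curvature".

```python
import numpy as np

def turning_points(contour: np.ndarray, angle_thresh_deg: float = 30.0) -> np.ndarray:
    """Return indices of points where a sampled 2D contour 'turns'.

    contour: (N, 2) array of points sampled in order along a closed
    contour, without duplicates. A point counts as a turning point when
    the direction change between its incoming and outgoing segments
    exceeds angle_thresh_deg.
    """
    v_in = contour - np.roll(contour, 1, axis=0)    # segment arriving at each point
    v_out = np.roll(contour, -1, axis=0) - contour  # segment leaving each point
    cos_a = np.sum(v_in * v_out, axis=1) / (
        np.linalg.norm(v_in, axis=1) * np.linalg.norm(v_out, axis=1) + 1e-12)
    angles = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
    return np.flatnonzero(angles > angle_thresh_deg)

# A square contour sampled along its edges: only the 4 corners turn.
square = np.array(
    [[x, 0] for x in range(4)] +          # bottom edge
    [[3, y] for y in range(1, 4)] +       # right edge
    [[x, 3] for x in range(2, -1, -1)] +  # top edge
    [[0, y] for y in range(2, 0, -1)],    # left edge
    dtype=float)
print(turning_points(square))  # [0 3 6 9] -- the corner indices
```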
For example, in conjunction with fig. 1 and as shown in fig. 3, the electronic device determines key feature points from the intersection points of the target object's contour curves. As shown in (A) of fig. 3, the electronic device takes the intersection points of the cube's edges as key feature points (the black dots in fig. 3). The electronic device then only needs to compute the spatial position information of each key feature point to construct the spatial structure model of the cube; the other pixels on each edge of the cube need not be computed, which greatly reduces the amount of data computation.
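The cube example can be made concrete with a small sketch: the spatial structure model reduces to 8 key feature points and the 12 edges connecting them, instead of every surface pixel. The representation below is an assumption for illustration.

```python
import numpy as np

# The 8 key feature points of a unit cube (the intersections of its
# contour edges), each as (x, y, z) spatial position information.
vertices = np.array([(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)],
                    dtype=float)

# The 12 edges connect pairs of key feature points that differ in
# exactly one coordinate; this wireframe is the spatial structure model.
edges = [(i, j) for i in range(8) for j in range(i + 1, 8)
         if np.sum(vertices[i] != vertices[j]) == 1]

print(len(vertices), "key feature points,", len(edges), "edges")
# 8 key feature points, 12 edges -- versus thousands of surface pixels.
```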
For example, the electronic device may acquire the material information of each material region of the target object's surface through the camera. When generating the three-dimensional model of the target object, the electronic device must fill each material into the corresponding spatial position. Therefore, after acquiring the material information of each material region, it also needs the position information of where that material should be filled, that is, the position information of the region corresponding to that material region in the three-dimensional model. It can be understood that three points determine a plane; hence, after the material information of each material region of the surface has been acquired, the electronic device only needs to obtain the position information of three distinct points in a material region to fill that region's material into the correct position when generating the three-dimensional model of the target object.
For example, in conjunction with fig. 1 and as shown in fig. 4, the electronic device determines key feature points from the intersections of adjacent material regions in the target object's surface. As shown in (A) of fig. 4, material region 41 and material region 42 are two adjacent material regions, and three key feature points (point a, point b and point c) are determined from them. By determining the spatial position information of these three key feature points, the position information of material region 41's corresponding region in the three-dimensional model can be determined when the three-dimensional model of the cube is generated.
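The "three points determine a plane" step can be sketched as follows; the coordinates of points a, b and c are hypothetical.

```python
import numpy as np

def plane_from_points(a, b, c):
    """Return (normal, d) of the plane through three non-collinear
    points, as the equation normal . x = d."""
    a, b, c = map(np.asarray, (a, b, c))
    normal = np.cross(b - a, c - a)
    norm = np.linalg.norm(normal)
    if norm < 1e-12:
        raise ValueError("points are collinear; they do not fix a plane")
    normal = normal / norm
    return normal, float(normal @ a)

# Points a, b and c of a material region (hypothetical coordinates):
n, d = plane_from_points([0, 0, 0], [1, 0, 0], [0, 1, 0])
print(n, d)  # [0. 0. 1.] 0.0 -- the region lies in the z = 0 plane
```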
Material information may be understood as a combination of the visual characteristic properties of an object's surface, including the texture, color, smoothness, transparency, refractive index and so on of that surface.
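A minimal data structure holding this material information might look as follows; the field names and value ranges are illustrative assumptions, not from the patent.

```python
from dataclasses import dataclass

@dataclass
class Material:
    """Visual characteristic properties of an object surface, as
    enumerated in the text; field names are illustrative only."""
    texture: str          # e.g. an identifier or path of a texture map
    color: tuple          # RGB triple
    smoothness: float     # 0 (rough) .. 1 (mirror-like)
    transparency: float   # 0 (opaque) .. 1 (fully transparent)
    refractive_index: float

matte_black = Material(texture="flat", color=(0, 0, 0),
                       smoothness=0.1, transparency=0.0,
                       refractive_index=1.0)
```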
For example, key feature points are points in three-dimensional space, so the electronic device needs to acquire the three-dimensional spatial position information of each key feature point. The electronic device can acquire the key feature points of the target object in at least the following two ways.
Mode 1:
In mode 1, the electronic device may determine the spatial position information of the key feature points through structured light. A structured-light system consists of a projector (a light source emitting invisible light) and a camera. The projector projects specific light patterns onto the object's surface and the background, and the camera captures them. The position, depth and other information of the object are computed from the changes the object causes in the light signal, restoring the entire three-dimensional space and thereby yielding the spatial position information of the key feature points.
Mode 2:
In mode 2, the electronic device may determine the spatial position information of the key feature points through a time-of-flight (ToF) sensor. The principle is as follows: the ToF sensor emits modulated near-infrared light, which is reflected when it meets an object; by computing the time difference or phase difference between light emission and reflection, the sensor converts this into the distance of the captured scene, generating depth information. The electronic device can then determine the spatial position information of the key feature points from that depth information.
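Converting such depth information into the (x, y, z) spatial position of a key feature point is, under a standard pinhole camera model, a simple back-projection. The sketch below assumes known camera intrinsics (fx, fy, cx, cy); the specific values are hypothetical.

```python
import numpy as np

def depth_to_points(depth: np.ndarray, fx: float, fy: float,
                    cx: float, cy: float) -> np.ndarray:
    """Back-project a ToF depth map into (x, y, z) spatial positions
    using a pinhole camera model with intrinsics fx, fy, cx, cy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)  # shape (h, w, 3)

# The spatial position of a key feature point at pixel (u, v) is then
# simply points[v, u].
depth = np.full((480, 640), 2.0)  # hypothetical flat scene, 2 m away
points = depth_to_points(depth, 525.0, 525.0, 320.0, 240.0)
print(points[240, 320])           # ~[0, 0, 2] at the optical centre
```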
Step 203, the imaging device fills the spatial structure model according to the material information corresponding to each material area in the surface of the target object, and generates a three-dimensional model of the target object.
Wherein the material region in the surface of the target object is divided based on a contour curve of the target object.
Illustratively, after the electronic device constructs a spatial structure model of the target object according to the spatial position information of the key feature points, the electronic device generates a three-dimensional model of the target object by filling materials into the spatial structure model.
For example, as shown in fig. 1, the material of material region 11 shown in (A) of fig. 1 is black. In conjunction with fig. 1 and as shown in (B) of fig. 3, after the electronic device constructs the spatial structure model of the cube, the material corresponding to material region 11 is filled into region 31 of that model. Alternatively, after the electronic device constructs the spatial structure model of the cube, region 41 of the model is filled with the material corresponding to material region 11.
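A minimal sketch of this filling step follows, with the spatial structure model as a dictionary of regions keyed by their bounding key feature points; the region names and the dictionary representation are assumptions for illustration.

```python
# Sketch of step 203: each region of the spatial structure model is
# described by the key feature points that bound it, and is filled
# with the material information of the corresponding surface region.
structure_model = {
    "region_31": {"key_points": [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]},
    "region_32": {"key_points": [(0, 0, 0), (0, 0, 1), (0, 1, 1), (0, 1, 0)]},
}

surface_materials = {  # material information per material region
    "region_31": {"color": "black", "texture": "matte"},
    "region_32": {"color": "grey", "texture": "gloss"},
}

def fill_model(model, materials):
    """Attach each region's material to the corresponding model region."""
    for region, material in materials.items():
        model[region]["material"] = material
    return model

three_d_model = fill_model(structure_model, surface_materials)
print(three_d_model["region_31"]["material"])  # {'color': 'black', ...}
```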
In this way, the spatial position information of the key feature points of the target object is obtained from the intersection points of the target object's contour curves, or from the intersection points of N adjacent material regions of different materials in the target object's surface; a spatial structure model of the target object is constructed from that spatial position information; and the spatial structure model is then filled according to the material information corresponding to each material region in the target object's surface, generating a three-dimensional model of the target object. The electronic device can construct the spatial structure model of the target object from the key feature points alone, without computing the spatial position of every pixel of the target object in the image captured by the camera, which reduces the computational load on the electronic device. Filling the spatial structure model with the material information corresponding to the target object's surface materials then greatly improves modeling efficiency.
Optionally, in the embodiments of the present application, before obtaining the key feature points of the target object, the electronic device needs to obtain the contour of the target object. To obtain the contour information of the target object more clearly and accurately, the electronic device may capture images of the target object under scenes of different brightness.
Illustratively, the above step 201 may include the following steps 201a1 and 201a2:
Step 201a1: the imaging device obtains contour information of the target object, the contour information including contour information of the target object captured under scenes of different brightness.
Step 201a2: the imaging device determines the spatial position information of the key feature points of the target object according to the contour information.
Illustratively, the contour information is used to represent the appearance contour of the target object, i.e., the cube contour partially shown by the dotted line in (B) of fig. 1. After the electronic device obtains the contour information of the target object, it takes the contour-curve intersection points indicated by that information as the key feature points of the target object.
In this way, the electronic device can obtain more accurate contour information of the target object by comparing images of the target object captured under scenes of different brightness.
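One plausible reading of this comparison is to take the union of edges detected in captures at different brightness levels, so that an edge washed out in one exposure can still be picked up in another. The sketch below uses crude gradient thresholding as a stand-in edge detector; both the detector and the threshold are assumptions.

```python
import numpy as np

def edge_map(image: np.ndarray, thresh: float) -> np.ndarray:
    """Crude edge map from horizontal/vertical intensity gradients."""
    gx = np.abs(np.diff(image, axis=1, prepend=image[:, :1]))
    gy = np.abs(np.diff(image, axis=0, prepend=image[:1, :]))
    return (gx + gy) > thresh

def combined_contour(images, thresh=0.2):
    """Union of edges found in captures of the same target object under
    different brightness scenes: an edge missed in one exposure may
    still be visible in another."""
    combined = np.zeros_like(images[0], dtype=bool)
    for img in images:
        combined |= edge_map(img, thresh)
    return combined

# Hypothetical captures: a bright square on a dark background.
square = np.zeros((8, 8)); square[2:6, 2:6] = 1.0
dark = square * 0.1     # underexposed: edge contrast 0.1, below threshold
bright = square * 0.5   # brighter capture: edge contrast 0.5, detected
contour = combined_contour([dark, bright])
print(contour.any())    # True: the edge missed in the dark capture is recovered
```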
Alternatively, in the embodiment of the present application, in the case where the key feature point is an intersection of contour curves of the target object, the electronic device may determine the key feature point by the following method.
For example, in the case that the key feature points are intersection points of the contour curves of the target object, before the above step 201, the imaging method provided by the embodiments of the present application further includes the following steps 201b1 and 201b2:
Step 201b1: the imaging device collects the contour curves of the target object.
Step 201b2: the imaging device takes the intersection points of the target contour curves as the key feature points of the target object.
Wherein the curvature of the contour curves of the target object is less than or equal to the preset curvature, and the target contour curve is a contour curve that intersects other contour curves among the contour curves of the target object.
For example, the intersection point may be an intersection point of at least two contour curves, and the electronic device constructs a spatial structure model of the target object according to the key feature point and the contour curves.
In this way, the electronic device can take the intersection points of the target object's contour curves as key feature points, thereby screening the mass of pixels and achieving the goal of reducing the amount of computation.
Optionally, in the embodiments of the present application, when the key feature points are intersection points of adjacent material regions in the surface of the target object, the key feature points of each material region may be extracted by the following method, which in turn determines the position at which the material corresponding to each region is filled in the three-dimensional model.
For example, in a case where the key feature point is an intersection of N adjacent material regions with different materials in the surface of the target object, the surface of the target object includes at least two material regions, and each material region includes at least three key feature points.
Illustratively, the above step 201 may further include the following step 201c:
Step 201c: the imaging device takes any feature point in the first material region as a first key feature point, and takes two feature points on the intersection line of the first material region and the second material region as second key feature points.
Wherein the first material region is any one of at least two material regions; the second material region is a material region adjacent to the first material region.
For example, since a plane can be determined by three points, three distinct points in the first material region can be obtained as key feature points, from which the spatial position information of that material region can be determined.
For example, in conjunction with fig. 1, after acquiring the material information of the material corresponding to material region 11, the electronic device needs the spatial position information of material region 11 in order to fill that material into the spatial structure model shown in (B) of fig. 4. In this case, it only needs the spatial position information of three points in material region 11 (for example, point a, point b and point c shown in (A) of fig. 4) to determine the spatial position information of region 41 of the spatial structure model, into which material region 11 is to be filled.
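The selection rule of step 201c can be sketched as follows; the point coordinates are hypothetical. The three returned points are exactly the input that a plane computation such as the plane_from_points sketch above requires.

```python
# Sketch of the key-point selection rule of step 201c: one arbitrary
# feature point inside the first material region, plus two feature
# points on its intersection line with the second material region.
region_interior = [(0.5, 0.5, 0.0)]  # feature points inside region 1 (hypothetical)
boundary_with_region2 = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]

def pick_key_points(interior, boundary):
    """First key feature point: any interior point; second key feature
    points: two distinct points on the intersection line."""
    first = interior[0]
    second = boundary[:2]
    return [first, *second]  # three points -- enough to fix the plane

print(pick_key_points(region_interior, boundary_with_region2))
```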
In this way, the electronic device can construct the spatial structure model of the target object from the three key feature points determined for each material region, and, when generating the three-dimensional model of the target object, determine the position at which the material corresponding to each material region is filled in the three-dimensional model.
Optionally, in this embodiment of the present application, when the material of the surface of the target object is filled into the spatial structure model of the target object, the material in each material region needs to be accurately filled into the corresponding region of the spatial structure model.
Illustratively, the surface of the target object includes at least two material regions, and the spatial structure model includes at least one region for filling the material.
Illustratively, the above step 203 may include the following step 203a:
Step 203a: the imaging device fills the target region in the spatial structure model according to the material information corresponding to the third material region, generating a three-dimensional model of the target object.
Wherein the third material region is any one of the at least two material regions, and the target region is the region corresponding to the third material region among the at least one region for filling material.
It should be noted that, after the electronic device constructs the spatial structure model of the target object, it fills the material corresponding to each material region of the target object into the spatial structure model; the correspondence between a material region and its region in the spatial structure model has been described in detail above and is not repeated here.
In this way, the electronic device can correctly fill the material of each material region of the target object's surface into the spatial structure model according to the correspondence between the third material region and the target region in the spatial structure model.
The imaging method provided by the embodiments of the present application first obtains contour information of a target object; determines, from that contour information, the intersection points of the target object's contour curves, or divides the target object's surface into material regions and determines the intersection points of N adjacent material regions of different materials; obtains the spatial position information of these key feature points; constructs a spatial structure model of the target object from that spatial position information; and then fills the spatial structure model according to the material information corresponding to each material region in the target object's surface, generating a three-dimensional model of the target object. The electronic device can construct the spatial structure model from the key feature points alone, without computing the spatial position of every pixel of the target object in the camera image, which reduces its computational load. Filling the spatial structure model with the material information corresponding to the target object's surface materials then shortens modeling time and greatly improves modeling efficiency.
It should be noted that the imaging method provided in the embodiments of the present application may be executed by an imaging apparatus, or by a control module in the imaging apparatus for executing the imaging method. In the embodiments of the present application, an imaging apparatus executing the imaging method is taken as an example to describe the imaging apparatus provided by the embodiments of the present application.
In the embodiments of the present application, the above methods are illustrated with reference to the drawings. In specific implementations, the imaging methods shown in the method drawings above may also be implemented in combination with any other combinable drawings illustrated in the above embodiments, which is not repeated here.
Fig. 5 is a schematic structural diagram of an imaging apparatus capable of implementing the embodiments of the present application. As shown in fig. 5, the imaging apparatus 600 includes: an acquisition module 601, a construction module 602 and a generation module 603. The acquisition module 601 is configured to acquire spatial position information of key feature points of a target object; the key feature points include any one of: intersection points of contour curves of the target object, or intersection points of N material regions in the surface of the target object. The construction module 602 is configured to construct a spatial structure model of the target object according to the spatial position information of the key feature points acquired by the acquisition module 601. The generation module 603 is configured to fill the spatial structure model constructed by the construction module 602 according to the material information corresponding to each material region in the surface of the target object, generating a three-dimensional model of the target object. The material regions in the surface of the target object are divided based on the contour curves of the target object, and the N material regions are of different materials.
Optionally, as shown in fig. 5, the imaging apparatus 600 further includes a determining module 604. The acquisition module 601 is specifically configured to acquire contour information of the target object, the contour information including contour information of the target object captured under scenes of different brightness; the determining module 604 is configured to determine the spatial position information of the key feature points of the target object according to the contour information acquired by the acquisition module 601.
Optionally, as shown in fig. 5, in the case that the key feature points are intersection points of the contour curves of the target object, the imaging apparatus 600 further includes a collection module 605, configured to collect the contour curves of the target object; the determining module 604 is configured to take the intersection points of the target contour curves as the key feature points of the target object. The curvature of the contour curves of the target object is less than or equal to a preset curvature, and the target contour curve is a contour curve that intersects other contour curves among the contour curves of the target object collected by the collection module 605.
Optionally, in the case that the key feature points are intersection points of N material regions in the surface of the target object, the surface of the target object includes at least two material regions, and each material region includes at least three key feature points. The acquisition module 601 is specifically configured to take any feature point in the first material region as a first key feature point, and to take two feature points on the intersection line of the first material region and the second material region as second key feature points; the first material region is any one of the at least two material regions, and the second material region is a material region adjacent to the first material region.
Optionally, the surface of the target object includes at least two material regions, and the spatial structure model includes at least one region for filling material. The generation module 603 is specifically configured to fill the target region in the spatial structure model according to the material information corresponding to the third material region, generating a three-dimensional model of the target object; the third material region is any one of the at least two material regions, and the target region is the region corresponding to the third material region among the at least one region for filling material.
It should be noted that, as shown in fig. 5, the modules necessarily included in the imaging apparatus 600 are illustrated with solid-line boxes, such as the acquisition module 601, the construction module 602 and the generation module 603; the modules that may be included in the imaging apparatus 600 are illustrated with dashed boxes, such as the determining module 604 and the collection module 605.
The imaging device in the embodiment of the present application may be a device, or may be a component, an integrated circuit, or a chip in a terminal. The device can be mobile electronic equipment or non-mobile electronic equipment. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine or a self-service machine, and the like, and the embodiments of the present application are not particularly limited.
The imaging apparatus in the embodiments of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
The imaging device provided in the embodiment of the present application can implement each process implemented by the method embodiments of fig. 2 to fig. 4, and is not described herein again to avoid repetition.
The imaging apparatus provided in the embodiments of the present application first obtains contour information of a target object; determines, from that contour information, the intersection points of the target object's contour curves, or divides the target object's surface into material regions and determines the intersection points of N adjacent material regions of different materials; obtains the spatial position information of these key feature points; constructs a spatial structure model of the target object from that spatial position information; and then fills the spatial structure model according to the material information corresponding to each material region in the target object's surface, generating a three-dimensional model of the target object. The electronic device can construct the spatial structure model from the key feature points alone, without computing the spatial position of every pixel of the target object in the camera image, which reduces its computational load. Filling the spatial structure model with the material information corresponding to the target object's surface materials then shortens modeling time and greatly improves modeling efficiency.
Optionally, as shown in fig. 6, an electronic device M00 is further provided in this embodiment of the present application, and includes a processor M01, a memory M02, and a program or an instruction stored in the memory M02 and executable on the processor M01, where the program or the instruction when executed by the processor M01 implements each process of the above-described imaging method embodiment, and can achieve the same technical effect, and details are not repeated here to avoid repetition.
It should be noted that the electronic devices in the embodiments of the present application include the mobile electronic devices and the non-mobile electronic devices described above.
Fig. 7 is a schematic diagram of a hardware structure of an electronic device implementing various embodiments of the present application.
The electronic device 100 includes, but is not limited to: a radio frequency unit 101, a network module 102, an audio output unit 103, an input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, and a processor 110.
Those skilled in the art will appreciate that the electronic device 100 may further include a power source (e.g., a battery) for supplying power to the various components; the power source may be logically connected to the processor 110 through a power management system, so that functions such as managing charging, discharging and power consumption are implemented through the power management system. The electronic device structure shown in fig. 7 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than those shown, combine some components, or arrange components differently, which is not repeated here.
The sensor 105 or the input unit 104 is configured to acquire spatial position information of a key feature point of the target object; the key feature points include any one of: intersection points of contour curves of the target object, or intersection points of N material regions in the surface of the target object; a processor 110, configured to construct a spatial structure model of the target object according to the spatial position information of the key feature points acquired by the sensor 105 or the input unit 104; a display unit 106, configured to fill the spatial structure model according to material information corresponding to each material region in the surface of the target object, and generate a three-dimensional model of the target object; the material regions in the surface of the target object are divided based on the contour curve of the target object, and the N material regions are different in material.
In this way, the spatial position information of the key feature points of the target object is obtained from the intersection points of the target object's contour curves, or from the intersection points of N adjacent material regions of different materials in the target object's surface; a spatial structure model of the target object is constructed from that spatial position information; and the spatial structure model is then filled according to the material information corresponding to each material region in the target object's surface, generating a three-dimensional model of the target object. The electronic device can construct the spatial structure model of the target object from the key feature points alone, without computing the spatial position of every pixel of the target object in the image captured by the camera, which reduces the computational load on the electronic device. Filling the spatial structure model with the material information corresponding to the target object's surface materials then greatly improves modeling efficiency.
Optionally, the sensor 105 or the input unit 104 is specifically configured to acquire contour information of the target object, where the contour information includes: contour information of a target object collected under different brightness scenes; and the processor 110 is used for determining the spatial position information of the key feature points of the target object according to the contour information acquired by the sensor 105 or the input unit 104.
In this way, the electronic device can obtain more accurate contour information of the target object by comparing images of the target object captured under scenes of different brightness.
Optionally, in a case that the key feature point is an intersection of contour curves of the target object, the sensor 105 or the input unit 104 is configured to acquire the contour curve of the target object; the processor 110 is configured to use an intersection point of the target contour curve as a key feature point of the target object; the curvature of the contour curve of the target object is smaller than or equal to a preset curvature; the target contour curve is a contour curve intersecting with other contour curves among the contour curves of the target object acquired by the sensor 105 or the input unit 104.
In this way, the electronic device can take the intersection points of the target object's contour curves as key feature points, thereby screening the mass of pixels and achieving the goal of reducing the amount of computation.
Optionally, in a case that the key feature points are intersections of N material regions in the surface of the target object, the surface of the target object includes at least two material regions, and each material region includes at least three key feature points; the processor 110 is specifically configured to use any feature point in the first material region as a first key feature point, and use two feature points on an intersection line of the first material region and the second material region as a second key feature point; the first material area is any one of the at least two material areas; the second material region is a material region adjacent to the first material region.
In this way, the electronic device may construct a spatial structure model of the target object through the three key feature points determined from each material region, and determine a position where the material corresponding to each material region is filled in the three-dimensional model when the three-dimensional model of the target object is generated.
Optionally, the surface of the target object comprises at least two material regions; the spatial structure model comprises at least one region for filling materials; the display unit 106 is specifically configured to fill the target region in the spatial structure model according to the material information corresponding to the third material region, and generate a three-dimensional model of the target object; the third material area is any one of the at least two material areas; the target region is a region corresponding to the third material region in at least one region for filling the material.
Therefore, the electronic device can correctly fill the material of each material area on the surface of the target object into the space structure model according to the corresponding relation between the third material area and the target area in the space structure model.
The electronic device provided by the embodiments of the present application first obtains contour information of a target object; determines, from that contour information, the intersection points of the target object's contour curves, or divides the target object's surface into material regions and determines the intersection points of N adjacent material regions of different materials; obtains the spatial position information of these key feature points; constructs a spatial structure model of the target object from that spatial position information; and then fills the spatial structure model according to the material information corresponding to each material region in the target object's surface, generating a three-dimensional model of the target object. The electronic device can construct the spatial structure model from the key feature points alone, without computing the spatial position of every pixel of the target object in the camera image, which reduces its computational load. Filling the spatial structure model with the material information corresponding to the target object's surface materials then shortens modeling time and greatly improves modeling efficiency.
It should be understood that, in the embodiment of the present application, the input Unit 104 may include a Graphics Processing Unit (GPU) 1041 and a microphone 1042, and the Graphics Processing Unit 1041 processes image data of a still picture or a video obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 107 includes a touch panel 1071 and other input devices 1072. The touch panel 1071 is also referred to as a touch screen. The touch panel 1071 may include two parts of a touch detection device and a touch controller. Other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein. The memory 109 may be used to store software programs as well as various data including, but not limited to, application programs and an operating system. The processor 110 may integrate an application processor, which primarily handles operating systems, user interfaces, applications, etc., and a modem processor, which primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the above-mentioned imaging method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the above imaging method embodiment, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
It should be noted that, in this document, the terms "comprises", "comprising" or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article or apparatus. Without further limitation, an element preceded by "comprising a …" does not exclude the presence of other identical elements in the process, method, article or apparatus that comprises that element. Further, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing functions in the order illustrated or discussed, but may also include performing functions in a substantially simultaneous manner or in a reverse order depending on the functions involved; for example, the described methods may be performed in an order different from that described, and various steps may be added, omitted or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling an electronic device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (12)

1. A method of imaging, the method comprising:
acquiring spatial position information of key feature points of a target object; the key feature points include any one of: an intersection of a contour curve of the target object, or an intersection of N material regions in a surface of the target object;
constructing a spatial structure model of the target object according to the spatial position information of the key feature points;
filling the spatial structure model according to the material information corresponding to each material region in the surface of the target object to generate a three-dimensional model of the target object;
the material regions in the surface of the target object are divided based on the contour curve of the target object, and the N material regions are different in material.
2. The method according to claim 1, wherein the obtaining spatial position information of key feature points of the target object comprises:
acquiring contour information of the target object, wherein the contour information comprises: contour information of a target object collected under different brightness scenes;
and determining the spatial position information of the key feature points of the target object according to the contour information.
3. The method according to claim 1 or 2, wherein, in the case that the key feature point is an intersection point of the contour curves of the target object, before the obtaining of the spatial position information of the key feature points of the target object, the method further comprises:
collecting the contour curves of the target object;
taking the intersection point of the target contour curve as a key feature point of the target object;
wherein the curvature of the contour curve of the target object is less than or equal to a preset curvature; the target contour curve is a contour curve which is intersected with other contour curves in the contour curves of the target object.
4. The method according to claim 1 or 2, wherein in the case that the key feature points are intersection points of N material regions in the surface of the target object, the surface of the target object comprises at least two material regions, each material region comprising at least three key feature points;
the method for acquiring the key feature points of the target object comprises the following steps:
taking any feature point in a first material region as a first key feature point, and taking two feature points on an intersection line of the first material region and a second material region as second key feature points;
wherein the first material region is any one of the at least two material regions, and the second material region is a material region adjacent to the first material region.
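Claim 4's selection rule, one arbitrary feature point inside a region plus two points on its intersection line with a neighbor, might be sketched as below; the `Region` container and its fields are hypothetical stand-ins for however feature points and intersection lines are actually stored.

```python
from dataclasses import dataclass, field


@dataclass
class Region:
    feature_points: list                         # feature points detected inside this material region
    borders: dict = field(default_factory=dict)  # neighbor id -> points on the shared intersection line


def select_key_points(first_region: Region, neighbor_id) -> list:
    first_key = first_region.feature_points[0]           # "any feature point" in the first region
    second_keys = first_region.borders[neighbor_id][:2]  # two points on the intersection line
    return [first_key, *second_keys]                     # yields the >= 3 key points per region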
5. The method of claim 1, wherein the surface of the target object comprises at least two material regions, and the spatial structure model comprises at least one region to be filled with material;
wherein the filling the spatial structure model according to the material information corresponding to each material region in the surface of the target object to generate a three-dimensional model of the target object comprises:
filling a target region in the spatial structure model according to material information corresponding to a third material region, to generate the three-dimensional model of the target object;
wherein the third material region is any one of the at least two material regions, and the target region is a region, of the at least one region to be filled with material, that corresponds to the third material region.
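A minimal sketch of claim 5's filling step, assuming the spatial structure model exposes its fillable regions as a dictionary and that a precomputed `region_map` records which fill region corresponds to each material region; all names here are hypothetical.

```python
def fill_structure_model(fill_regions: dict, material_info: dict,
                         region_map: dict) -> dict:
    """fill_regions: region_id -> geometry in the structure model;
    material_info: material_region -> material;
    region_map:    material_region -> the region_id it corresponds to."""
    three_d_model = {}
    for mat_region, material in material_info.items():  # each one plays the "third material region"
        target_id = region_map[mat_region]              # the corresponding target region
        three_d_model[target_id] = {"geometry": fill_regions[target_id],
                                    "material": material}
    return three_d_model
```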
6. An imaging apparatus, characterized in that the apparatus comprises: an acquisition module, a construction module, and a generation module;
wherein the acquisition module is used for acquiring spatial position information of key feature points of a target object; the key feature points include any one of: intersection points of contour curves of the target object, or intersection points of N material regions in a surface of the target object;
the construction module is used for constructing a spatial structure model of the target object according to the spatial position information of the key feature points acquired by the acquisition module;
the generation module is used for filling the spatial structure model constructed by the construction module according to the material information corresponding to each material region in the surface of the target object, to generate a three-dimensional model of the target object;
wherein the material regions in the surface of the target object are divided based on the contour curves of the target object, and the N material regions differ in material.
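The apparatus of claim 6 is the method of claim 1 recast as three cooperating modules; a hypothetical wiring (module implementations elided) could read:

```python
class ImagingApparatus:
    """Mirrors claim 6: acquisition -> construction -> generation."""

    def __init__(self, acquisition_module, construction_module, generation_module):
        self.acquire = acquisition_module     # target -> key feature point positions
        self.construct = construction_module  # positions -> spatial structure model
        self.generate = generation_module     # (structure, material info) -> 3D model

    def image(self, target, material_info):
        key_points = self.acquire(target)
        structure = self.construct(key_points)
        return self.generate(structure, material_info)
```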
7. The apparatus of claim 6, further comprising: a determination module;
wherein the acquisition module is specifically configured to acquire contour information of the target object, the contour information comprising: contour information of the target object collected under different brightness scenes;
and the determination module is used for determining the spatial position information of the key feature points of the target object according to the contour information acquired by the acquisition module.
8. The apparatus according to claim 6 or 7, wherein in the case that the key feature points are intersection points of contour curves of the target object, the apparatus further comprises: a collection module and a determination module;
wherein the collection module is used for collecting contour curves of the target object;
the determination module is used for taking an intersection point of a target contour curve as a key feature point of the target object;
wherein the curvature of each contour curve of the target object is less than or equal to a preset curvature, and the target contour curve is a contour curve, among the contour curves of the target object collected by the collection module, that intersects another contour curve.
9. The apparatus according to claim 6 or 7, wherein in the case that the key feature points are intersection points of N material regions in the surface of the target object, the surface of the target object comprises at least two material regions, each material region comprising at least three key feature points;
wherein the acquisition module is specifically configured to take any feature point in a first material region as a first key feature point, and take two feature points on an intersection line of the first material region and a second material region as second key feature points;
wherein the first material region is any one of the at least two material regions, and the second material region is a material region adjacent to the first material region.
10. The apparatus of claim 6, wherein the surface of the target object comprises at least two material regions, and the spatial structure model comprises at least one region to be filled with material;
wherein the generation module is specifically configured to fill a target region in the spatial structure model according to material information corresponding to a third material region, to generate a three-dimensional model of the target object;
wherein the third material region is any one of the at least two material regions, and the target region is a region, of the at least one region to be filled with material, that corresponds to the third material region.
11. An electronic device, comprising a processor, a memory, and a program or instructions stored on the memory and executable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the imaging method according to any one of claims 1 to 5.
12. A readable storage medium, characterized in that it stores thereon a program or instructions which, when executed by a processor, implement the steps of the imaging method according to any one of claims 1 to 5.
CN202011167192.7A 2020-10-27 2020-10-27 Imaging method, imaging device, electronic equipment and readable storage medium Pending CN112348965A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011167192.7A CN112348965A (en) 2020-10-27 2020-10-27 Imaging method, imaging device, electronic equipment and readable storage medium

Publications (1)

Publication Number Publication Date
CN112348965A (en) 2021-02-09

Family

ID=74358759

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011167192.7A Pending CN112348965A (en) 2020-10-27 2020-10-27 Imaging method, imaging device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN112348965A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114708412A (en) * 2022-06-06 2022-07-05 Jiangxi Yingshang Technology Co., Ltd. Indoor setting method, device and system based on VR

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102324102A (en) * 2011-10-08 2012-01-18 Beihang University Method for automatically filling structure information and texture information of hole area of image scene
WO2015199502A1 (en) * 2014-06-26 2015-12-30 Korea Advanced Institute of Science and Technology Apparatus and method for providing augmented reality interaction service
CN108563859A (en) * 2018-04-10 2018-09-21 The 28th Research Institute of China Electronics Technology Group Corporation A method of quickly generating buildings model for the navigation of individual soldier's indoor positioning
US20190122376A1 (en) * 2017-10-20 2019-04-25 Arcsoft (Hangzhou) Multimedia Technology Co., Ltd. Method and device for image processing


Similar Documents

Publication Publication Date Title
KR102524422B1 (en) Object modeling and movement method and device, and device
CN110147231B (en) Combined special effect generation method and device and storage medium
CN110276840B (en) Multi-virtual-role control method, device, equipment and storage medium
CN109771951B (en) Game map generation method, device, storage medium and electronic equipment
CN111815755A (en) Method and device for determining shielded area of virtual object and terminal equipment
CN110163942B (en) Image data processing method and device
CN109325450A (en) Image processing method, device, storage medium and electronic equipment
CN107223269A (en) Three-dimensional scene positioning method and device
KR20220083839A (en) A method and apparatus for displaying a virtual scene, and an apparatus and storage medium
US11989900B2 (en) Object recognition neural network for amodal center prediction
CN103914876A (en) Method and apparatus for displaying video on 3D map
CN112184873B (en) Fractal graph creation method, fractal graph creation device, electronic equipment and storage medium
JP2023171435A (en) Device and method for generating dynamic virtual content in mixed reality
US20170330384A1 (en) Product Image Processing Method, and Apparatus and System Thereof
CN114782646A (en) House model modeling method and device, electronic equipment and readable storage medium
CN115496845A (en) Image rendering method and device, electronic equipment and storage medium
CN112206519B (en) Method, device, storage medium and computer equipment for realizing game scene environment change
CN112348965A (en) Imaging method, imaging device, electronic equipment and readable storage medium
CN110378948B (en) 3D model reconstruction method and device and electronic equipment
CN117095150A (en) Visual field degree of freedom analysis method and device
CN115619986B (en) Scene roaming method, device, equipment and medium
CN116912387A (en) Texture map processing method and device, electronic equipment and storage medium
CN113313796B (en) Scene generation method, device, computer equipment and storage medium
CN114820980A (en) Three-dimensional reconstruction method and device, electronic equipment and readable storage medium
CN114972466A (en) Image processing method, image processing device, electronic equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination