CN111932448B - Data processing method, device, storage medium and equipment - Google Patents


Info

Publication number
CN111932448B
Authority
CN
China
Prior art keywords
dimensional model
model data
value
data
dimensional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010938064.1A
Other languages
Chinese (zh)
Other versions
CN111932448A (en)
Inventor
朱能胜
张召世
汪阅冬
乐敏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanchang Virtual Reality Institute Co Ltd
Original Assignee
Nanchang Virtual Reality Institute Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanchang Virtual Reality Institute Co Ltd
Priority to CN202010938064.1A
Publication of CN111932448A
Application granted
Publication of CN111932448B
Legal status: Active


Classifications

    • G06T3/067
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00: Manipulating 3D models or images for computer graphics
    • G06T19/006: Mixed reality
    • G06T19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T7/00: Image analysis
    • G06T7/90: Determination of colour characteristics


Abstract

The invention provides a data processing method, a data processing apparatus, a storage medium, and a device. The method comprises: obtaining three-dimensional model data of a target object; converting the three-dimensional model data into two-dimensional model data; and, when a modification instruction for the two-dimensional model data is received, updating the two-dimensional model data according to the modification instruction and then updating and displaying the three-dimensional model data according to the updated two-dimensional model data. Because edits made to the two-dimensional model data are reflected on the three-dimensional model, and hence on the projected object, in real time, the maker can observe the image on the object while modifying the content. This solves the prior-art problems that developing and producing projection content is costly and inefficient.

Description

Data processing method, device, storage medium and equipment
Technical Field
The present invention relates to the field of virtual reality technologies, and in particular, to a data processing method, an apparatus, a storage medium, and a device.
Background
With the rapid development of virtual reality (VR) technology, light-carving projection (also called stereoscopic light carving, a form of projection mapping) has appeared on the market. It is a projection technology that turns a three-dimensional object into a display surface: a projector projects images matched to the object, producing an augmented-reality effect on the object itself.
In light-carving projection, the object may be a large structure such as a building or landscape feature, a small outdoor object, or a theatrical set. Special software projects two-dimensional or three-dimensional images onto the object, and the images are adjusted to match the object so that they conform to it and to its environment.
In the prior art, projection content is developed with the three-dimensional contour of the real object, captured by a camera, as the reference. A maker can only judge whether the content meets the display requirement by generating the complete projection content and projecting it onto the three-dimensional object; if it does not, the content must be modified and projected again, and these steps are repeated until the requirement is met. Developing the projection content repeatedly in this way greatly reduces content-development efficiency and increases the complexity of producing the projection content.
Disclosure of Invention
Based on this, the present invention provides a data processing method, an apparatus, a storage medium, and a device to solve the prior-art problems that developing and producing projection content is costly and inefficient.
One aspect of the present invention provides a data processing method, including the steps of:
acquiring three-dimensional model data of a target object scanned by a scanner;
converting the three-dimensional model data into two-dimensional model data in a first preset format;
and when a modification instruction for the two-dimensional model data is received, updating the two-dimensional model data according to the modification instruction, and updating and displaying the three-dimensional model data according to the updated two-dimensional model data.
In addition, the data processing method according to the above embodiment of the present invention may further have the following additional technical features:
further, the step of converting the three-dimensional model data into two-dimensional model data in a first preset format includes:
the method comprises the steps of establishing a three-dimensional coordinate system by taking the bottom center of a three-dimensional model as a coordinate origin and the axial direction of the three-dimensional model as the Y-axis direction, setting a starting line of the three-dimensional model, obtaining UV values corresponding to all characteristic points in three-dimensional model data, unfolding the three-dimensional model into a two-dimensional model, and determining the XY value of each characteristic point on the two-dimensional model according to the starting line.
Further, the step of updating and displaying the three-dimensional model data according to the updated two-dimensional model data includes:
acquiring XY values of all characteristic points in the updated two-dimensional model data;
determining a UV value corresponding to the XY value of each feature point according to the corresponding relation between the XY value and the UV value;
and updating the three-dimensional model data according to the UV value of each characteristic point.
Further, the step of expanding the three-dimensional model into a two-dimensional model comprises:
selecting a start line on the outer peripheral surface of the three-dimensional model, and dividing the outer peripheral surface of the three-dimensional model into a plurality of rectangles with the start line as the reference;
rotating the three-dimensional model so that the rectangles rotate one by one to the front of the three-dimensional model;
capturing the rectangle located directly in front of the three-dimensional model each time, and splicing the captured rectangles one by one in capture order to form the two-dimensional model.
Further, the step of obtaining the UV value corresponding to each feature point in the three-dimensional model data further includes:
carrying out normalization processing on the UV values corresponding to the characteristic points to obtain normalized UV values corresponding to the characteristic points;
after the step of unfolding the three-dimensional model into a two-dimensional model and determining the XY value of each feature point on the two-dimensional model according to the start line, the method further comprises the following steps:
and carrying out normalization processing on the XY values corresponding to the characteristic points to obtain the normalized XY values corresponding to the characteristic points.
Further, the step of performing normalization processing on the UV values corresponding to the feature points to obtain normalized UV values corresponding to the feature points includes:
creating a rotation plane passing through the coordinate origin and the central axis of the three-dimensional model, and setting the rotation starting point of the rotation plane to 0 degrees;
letting A be any point on the three-dimensional model, lying on an intersection line of the rotation plane and the three-dimensional model; the normalized coordinates of point A are (U, V), where U = θ/360°, the V value is the axial height of point A divided by the axial height of the three-dimensional model, and θ is the angle between the projections, onto the bottom surface, of the intersection line through point A and of the rotation starting point of the rotation plane;
the step of performing normalization processing on the XY values corresponding to the feature points to obtain the normalized XY values corresponding to the feature points comprises:
Let A1(X1, Y1) be a point on the two-dimensional model and (X, Y) its normalized coordinates; then X = X1/1920 and Y = Y1/1080, where 1920 and 1080 come from the two-dimensional model's resolution of 1920 × 1080.
Further, before acquiring the UV value corresponding to each feature point in the three-dimensional model data, the method further includes:
and converting the three-dimensional model data into editable three-dimensional model data.
The invention also provides a data processing apparatus, applied to a data processing device, the apparatus comprising:
the data acquisition module is used for acquiring three-dimensional model data of a target object scanned by the scanner;
the data conversion module is used for converting the three-dimensional model data into two-dimensional model data in a first preset format;
and the data updating module is used for updating the two-dimensional model data according to a modification instruction when a modification instruction for the two-dimensional model data is received, and updating and displaying the three-dimensional model data according to the updated two-dimensional model data.
The present invention also provides a computer-readable storage medium on which a computer program is stored, which program, when executed by a processor, is adapted to carry out the above-mentioned data processing method.
The invention also provides a data processing device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the data processing method as described above when executing the program.
According to the data processing method, apparatus, storage medium, and device described above, three-dimensional model data of a target object obtained by a scanner is acquired and converted into two-dimensional model data in a first preset format; when a modification instruction for the two-dimensional model data is received, the two-dimensional model data is updated according to the instruction, and the three-dimensional model data is updated and displayed according to the updated two-dimensional model data. The image can thus be observed on the object while projection content is created and modified, with the modified content displayed on the object immediately. This reduces the cost of developing projection content and improves development efficiency, solving the prior-art problems that the cost of developing and producing projection content is high and the efficiency is low.
Drawings
FIG. 1 is a general diagram of the method steps in a first embodiment of the present invention;
FIG. 2 shows the detailed steps of step S102 in FIG. 1;
FIG. 3 shows the detailed steps of step S1021 in FIG. 2;
FIG. 4 shows a step supplementary to step S101 in FIG. 1;
FIG. 5 shows the detailed steps of step S1022 in FIG. 2;
FIG. 6 shows the detailed steps of step S103 in FIG. 1;
FIG. 7 is a schematic view of an apparatus for practical use of the present invention;
FIG. 8 is a schematic diagram of three-dimensional modeling when the coordinate values of the three-dimensional model are obtained;
FIG. 9 is a diagram of the process of converting the three-dimensional model into the two-dimensional model;
FIG. 10 is a block diagram of the data processing apparatus;
FIG. 11 is a block diagram of the data processing device.
Description of the main element symbols:
data acquisition module 11 Data conversion module 12
Data updating module 13 Processor with a memory having a plurality of memory cells 10
Memory device 20 Computer program 30
Scanner 100 Graphics processor 200
Display device 300
The following detailed description will further illustrate the invention in conjunction with the above-described figures.
Detailed Description
To facilitate an understanding of the invention, the invention will now be described more fully with reference to the accompanying drawings. Several embodiments of the invention are presented in the drawings. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete.
It will be understood that when an element is referred to as being "secured to" another element, it can be directly on the other element or intervening elements may also be present. When an element is referred to as being "connected" to another element, it can be directly connected to the other element or intervening elements may also be present. The terms "vertical," "horizontal," "left," "right," and the like as used herein are for illustrative purposes only.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
Example one
Referring to fig. 1, a data processing method according to a first embodiment of the present invention is shown, and the method includes steps S101 to S103.
Step S101, acquiring three-dimensional model data of a target object scanned by a scanner;
in the above steps, the point cloud data of the three-dimensional object obtained by scanning the object by the scanner is converted into three-dimensional model data of the object by inverse modeling software, which is generally in STL format.
Step S102, converting the three-dimensional model data into two-dimensional model data in a first preset format;
in the above steps, format conversion is performed on three-dimensional model data in an STL format through three-dimensional modeling software to convert the three-dimensional model data into an STP or OBJ format, the converted format can be read by image editable software, after the format conversion, the three-dimensional modeling software expands the three-dimensional model data to obtain a two-dimensional image, the three-dimensional model data is expanded into a two-dimensional image, each pixel point on the two-dimensional image corresponds to a three-dimensional object one to one, the image editable software is software capable of reading the three-dimensional model data and the two-dimensional model data, content development and creation can be performed on the two-dimensional image, and the content creation mode includes pictures, videos, special effects, colors, gray scales and the like. Specifically, the first preset format is a format recognizable by image editable software.
As a specific example, referring to fig. 2, step S102 specifically includes the following steps S1021 to S1022:
step S1021, acquiring a three-dimensional coordinate value corresponding to each feature point in the three-dimensional model data;
step S1022, the three-dimensional model is expanded into a two-dimensional model, and the two-dimensional coordinate value of each feature point on the two-dimensional model is determined according to the correspondence between the three-dimensional coordinate value and the two-dimensional coordinate value.
In practice, the following steps can be referred to:
establishing a three-dimensional coordinate system by taking the bottom center of the three-dimensional model as a coordinate origin and the axial direction of the three-dimensional model as the Y-axis direction;
setting a starting line of a three-dimensional model, and acquiring UV values corresponding to all characteristic points in the three-dimensional model data;
and unfolding the three-dimensional model into a two-dimensional model, and determining the XY value of each characteristic point on the two-dimensional model according to the starting line.
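The UV computation in the steps above can be sketched as follows. This is an illustrative sketch, not the patent's implementation: it assumes the coordinate system just described, with the start line taken along the positive Z direction and `model_height` denoting the model's axial height.

```python
import math

def feature_point_uv(x, y, z, model_height):
    """Compute the normalized UV value of a 3D feature point.

    Assumes a coordinate system with the origin at the model's bottom
    centre, the model axis along Y, and the start line at theta = 0
    in the +Z direction (an assumption for illustration).
    """
    theta = math.degrees(math.atan2(x, z)) % 360.0  # angle from the start line
    u = theta / 360.0                               # U = theta / 360 degrees
    v = y / model_height                            # axial height fraction
    return u, v
```

For example, a point a quarter-turn from the start line at half the model's height would map to approximately (0.25, 0.5).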
Referring to fig. 3, after the step of obtaining the three-dimensional coordinate values corresponding to the feature points in the three-dimensional model data, the method further includes step S10211:
step S10211, carrying out normalization processing on the three-dimensional coordinate value corresponding to the feature point to obtain a normalized three-dimensional coordinate value corresponding to the feature point;
further, after the step of determining the two-dimensional coordinate value of each feature point on the two-dimensional model according to the corresponding relationship between the three-dimensional coordinate value and the two-dimensional coordinate value, the method further includes step S1023, as follows:
step S1023, normalization processing is carried out on the two-dimensional coordinate values corresponding to the feature points, and normalized two-dimensional coordinate values corresponding to the feature points are obtained.
In the present application, the three-dimensional and two-dimensional coordinate values are each normalized, so that, once the relationship between the three-dimensional and two-dimensional models is established, the three-dimensional and two-dimensional coordinate values can be put into one-to-one correspondence.
The method specifically refers to the following steps:
the step of obtaining the UV value corresponding to each feature point in the three-dimensional model data further includes:
carrying out normalization processing on the UV values corresponding to the characteristic points to obtain normalized UV values corresponding to the characteristic points;
after the step of unfolding the three-dimensional model into a two-dimensional model and determining the XY value of each feature point on the two-dimensional model according to the start line, the method further comprises the following steps:
and carrying out normalization processing on the XY values corresponding to the characteristic points to obtain the normalized XY values corresponding to the characteristic points.
Further, the step of performing normalization processing on the UV values corresponding to the feature points to obtain normalized UV values corresponding to the feature points includes:
creating a rotation plane passing through the coordinate origin and the central axis of the three-dimensional model, and setting the rotation starting point of the rotation plane to 0 degrees;
letting A be any point on the three-dimensional model, lying on an intersection line of the rotation plane and the three-dimensional model; the normalized coordinates of point A are (U, V), where U = θ/360°, the V value is the axial height of point A divided by the axial height of the three-dimensional model, and θ is the angle between the projections, onto the bottom surface, of the intersection line through point A and of the rotation starting point of the rotation plane.
The step of performing normalization processing on the XY values corresponding to the feature points to obtain the normalized XY values corresponding to the feature points comprises:
Let A1(X1, Y1) be a point on the two-dimensional model and (X, Y) its normalized coordinates; then X = X1/1920 and Y = Y1/1080, where 1920 and 1080 come from the two-dimensional model's resolution of 1920 × 1080.
That is, the UV value is the normalized coordinate value on the three-dimensional model, the XY value is the normalized coordinate value on the two-dimensional model, and the UV and XY values correspond one-to-one.
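The XY normalization and its inverse can be sketched directly; the 1920 and 1080 come from the 1920 × 1080 resolution assumed for the unfolded two-dimensional model (the function names are illustrative):

```python
def normalize_xy(x1, y1, width=1920, height=1080):
    """X = X1/1920, Y = Y1/1080: map a pixel of the unfolded image to
    normalized [0, 1] coordinates that pair one-to-one with (U, V)."""
    return x1 / width, y1 / height

def denormalize_xy(x, y, width=1920, height=1080):
    """Inverse mapping: recover pixel coordinates from normalized (X, Y)."""
    return round(x * width), round(y * height)
```

With these, the centre pixel (960, 540) normalizes to (0.5, 0.5), matching the midpoint of the (U, V) range.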
Because the three-dimensional model obtained by scanning the object consists of countless points, it becomes surface data after reverse modeling: the feature points are not lost but are linked with other points to form surfaces, and the UV value of each feature point of the three-dimensional model is calculated from these points. Specifically, a suitable three-dimensional coordinate system is established from the three-dimensional model data of the object, and the UV value of each feature point of the three-dimensional model is obtained in that coordinate system.
Referring to FIG. 8, take a vase as an example. A scanner scans the vase, and information about every point of the vase, such as its height, width, and length, can be read in three-dimensional modeling software. First, a start line is determined on the vase. With the transverse direction as the X axis, the longitudinal direction as the Y axis, and the Z axis pointing out of the page, an XYZ coordinate system is established at the centre of the vase's bottom. Point A is any point on the vase, and its coordinates (U, V) are determined as follows. The YOZ plane intersects the vase in two lines, one towards the front of the page and one towards the back; take the line nearer the positive Z direction as the start line and define its angle as 0°. Point A lies on one such line of the vase. Rotating the YOZ plane 360° about the Y axis produces an intersection line with the vase at every angle; if the angle between the Z axis and the projection onto the XOZ plane of the intersection line through point A is θ, then for point A, U = θ/360°, and the V value is the Y-direction height of point A divided by the Y-direction height of the vase.
It can be understood that, in the above steps, the vase consists of countless intersection lines, each defined by an angle; that angle divided by 360° gives the normalized coordinate U, and V is the Y-direction height of a point on the line divided by the Y-direction height of the vase. The coordinate value of each feature point on the vase is thus defined as the normalized coordinate (U, V). Because the (U, V) coordinates are normalized, they can be put into one-to-one correspondence with the coordinate values (X, Y) in the two-dimensional model; associating the (X, Y) and (U, V) coordinates means that modifications made to the two-dimensional model are reflected correspondingly on the three-dimensional model.
Referring to fig. 5, as a specific example, the step of expanding the three-dimensional model into the two-dimensional model includes steps S10221 to S10223 as follows:
a step S10221 of dividing an outer peripheral surface of the three-dimensional model into a plurality of rectangles;
in the above step, the step of dividing the outer peripheral surface of the three-dimensional model into a plurality of rectangles is specifically to select a start line on the outer peripheral surface of the three-dimensional model, and divide the outer peripheral surface of the three-dimensional model into a plurality of rectangles with the start line as a reference.
Step S10222, rotating the three-dimensional model so that the rectangles rotate one by one to a position directly in front of the three-dimensional model (for example, the Z-axis direction may be defined as directly in front);
step S10223, intercepting the rectangles which are positioned right in front of the three-dimensional model each time, and arranging and splicing the rectangles one by one according to the intercepting time sequence to form the two-dimensional model.
Referring to FIGS. 8 and 9, and continuing the vase example, the three-dimensional model is unfolded into a two-dimensional model as follows: rotate the vase's start line to the front; divide the whole vase into numerous rectangles starting from the start line (the rectangles are shown by dotted lines; their width can be set freely, and the smaller the width, the higher the precision but the larger the amount of data to process); capture only the frontmost rectangle each time; rotate the vase until all rectangles have been captured; and finally splice them into the two-dimensional model.
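The strip-by-strip unrolling described above can be sketched as follows. This is an illustrative sketch under stated assumptions: `sample_color` is a hypothetical sampler standing in for reading the frontmost strip of the rotated model, and the default parameters yield a 1920-pixel-wide image.

```python
import numpy as np

def unroll_surface(sample_color, n_strips=480, strip_width=4, height=1080):
    """Unroll a surface of revolution into a 2D image: rotate in fixed
    angular steps, capture the frontal strip at each step, and splice
    the strips left to right in capture order.

    `sample_color(theta_deg, v)` is a hypothetical sampler returning the
    surface value at angle theta (measured from the start line) and
    normalized height v.
    """
    strips = []
    for i in range(n_strips):
        theta = i * 360.0 / n_strips  # rotate the start line by one step
        column = np.array([[sample_color(theta, y / (height - 1))]
                           for y in range(height)])
        # a narrower strip (smaller width) gives higher precision
        strips.append(np.repeat(column, strip_width, axis=1))
    return np.hstack(strips)  # splice the strips in capture order
```

Splicing in capture order is what keeps the image's X axis proportional to the rotation angle, which is exactly the property the U = θ/360° correspondence relies on.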
It can be understood that, by this mathematical principle, the vase can be divided arbitrarily finely, i.e., into arbitrarily many rectangles. The three-dimensional model is thereby converted into an associated two-dimensional model, and modifications made on the two-dimensional model can be reflected in the three-dimensional model.
Specifically, the two-dimensional model obtained by unfolding the three-dimensional model can be given a resolution of 1920 × 1080. Its start line is the same as the start line of the (U, V) coordinates. The coordinates of every point on the two-dimensional model are then normalized by dividing the X coordinate by 1920 and the Y coordinate by 1080, finally giving the coordinate value (X, Y) of every point. The coordinates (U, V) and (X, Y) correspond one-to-one, so the pixels of the three-dimensional model and the two-dimensional model correspond one-to-one.
Step S103, when a modification instruction for the two-dimensional model data is received, updating the two-dimensional model data according to the modification instruction, and updating and displaying the three-dimensional model data according to the updated two-dimensional model data.
In the above step, suppose for example that the colour of a decorative typeface on the three-dimensional model data needs to be changed: the typeface is displayed in red after projection and must be changed to blue. According to the modification instruction changing the red display to blue, the typeface only needs to be updated to a blue display in the two-dimensional model data, and the same typeface in the three-dimensional model data is immediately updated to a blue display as well, which is convenient and efficient.
Referring to fig. 6, in a specific implementation, the step of updating and displaying the three-dimensional model data according to the updated two-dimensional model data includes steps S1031 to S1033:
step S1031, obtaining two-dimensional coordinate values of each feature point in the updated two-dimensional model data;
in specific implementation, acquiring XY values of all characteristic points in the updated two-dimensional model data;
step S1032, determining a three-dimensional coordinate value corresponding to the two-dimensional coordinate value of each feature point according to the corresponding relationship between the two-dimensional coordinate value and the three-dimensional coordinate value;
specifically, determining a UV value corresponding to the XY value of each feature point according to the corresponding relation between the XY value and the UV value;
step S1033, updating the three-dimensional model data according to the three-dimensional coordinate value of each feature point.
Specifically, the three-dimensional model data is updated according to the UV value of each feature point.
In practical implementation, because the relationship between the two-dimensional model data and the three-dimensional model data was established in step S102, when image processing is performed on the two-dimensional image, the result is displayed on the three-dimensional model instantly.
Specifically, since the start line (coordinate origin) of the vase's three-dimensional (U, V) coordinates and that of the two-dimensional image's (X, Y) coordinates lie at the same position on the object, the normalized (U, V) and (X, Y) coordinates are guaranteed to correspond one-to-one. When the processor renders the two-dimensional image (for example, modifies the colour or grey scale of a pixel (X1, Y1)), it simultaneously collects the pixel's (X1, Y1) coordinates and its colour/grey-scale information, finds the point (U1, V1) on the three-dimensional model corresponding to (X1, Y1), and assigns the colour/grey-scale information of (X1, Y1) to (U1, V1), so that content rendered on the two-dimensional image is displayed on the three-dimensional model in real time.
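A minimal sketch of this propagation, assuming the one-to-one (X, Y) → (U, V) map has been precomputed as a dictionary keyed by normalized pixel coordinates (all names here are illustrative, not the patent's):

```python
def propagate_edit(xy_to_uv, colors_3d, x1, y1, color, width=1920, height=1080):
    """Apply a colour edit made at 2D pixel (x1, y1) to the matching
    3D feature point via the precomputed (X, Y) -> (U, V) map.

    `xy_to_uv` maps normalized (X, Y) pairs to (U, V) pairs;
    `colors_3d` holds the 3D model's colour per (U, V) point.
    """
    key = (round(x1 / width, 6), round(y1 / height, 6))  # normalized (X, Y)
    uv = xy_to_uv[key]              # matching (U1, V1) on the 3D model
    colors_3d[uv] = color           # give the pixel's colour to (U1, V1)
    return uv
```

The dictionary lookup stands in for "finding the point (U1, V1) corresponding to (X1, Y1)"; in practice the correspondence is fixed once the start lines coincide, so the map never changes between edits.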
In the application, when the three-dimensional model needs to be perfect, only the position corresponding to the two-dimensional model needs to be modified and created, and the modified content can be immediately reflected on the three-dimensional model, so that the projection content can be perfect without interrupting projection, the manufacturing cost for developing the projection content is reduced, and the efficiency for developing and manufacturing the projection content is improved.
As shown in fig. 4, it should be further explained that, before acquiring the UV values corresponding to the feature points in the three-dimensional model data, the method further includes step S1011:
step S1011, converting the three-dimensional model data into editable three-dimensional model data.
In summary, in the data processing method of the above embodiments of the present invention, the three-dimensional model data of a target object scanned by a scanner is acquired; the three-dimensional model data is converted into two-dimensional model data in a first preset format; and when a modification instruction for the two-dimensional model data is received, the two-dimensional model data is updated according to the modification instruction, and the three-dimensional model data is updated and displayed according to the updated two-dimensional model data. The projection content can thus be created and modified while observing the image displayed on the object, and the modified content is displayed on the object in real time, which reduces the cost of developing projection content and improves the efficiency of developing and producing it, thereby solving the prior-art problems of high production cost and low efficiency in developing projection content.
Example two
Referring to fig. 10, a data processing apparatus according to a second embodiment of the present invention is shown, which is applied to a data processing device, and includes:
the data acquisition module 11 is configured to acquire three-dimensional model data of a target object scanned by a scanner;
the data conversion module 12 is configured to convert the three-dimensional model data into two-dimensional model data in a first preset format;
and the data updating module 13 is configured to, when a modification instruction for the two-dimensional model data is received, update the two-dimensional model data according to the modification instruction, and to update and display the three-dimensional model data according to the updated two-dimensional model data.
Wherein the data conversion module 12 comprises:
A three-dimensional coordinate value acquisition unit for acquiring a three-dimensional coordinate value corresponding to each feature point in the three-dimensional model data;
and the two-dimensional coordinate value acquisition unit is used for unfolding the three-dimensional model into a two-dimensional model and determining the two-dimensional coordinate value of each feature point on the two-dimensional model according to the corresponding relation between the three-dimensional coordinate value and the two-dimensional coordinate value.
Further, in some optional embodiments of the present invention, the data updating module 13 includes:
a coordinate value updating unit, configured to obtain a two-dimensional coordinate value of each feature point in the updated two-dimensional model data;
the coordinate value determining unit is used for determining a three-dimensional coordinate value corresponding to the two-dimensional coordinate value of each feature point according to the corresponding relation between the two-dimensional coordinate value and the three-dimensional coordinate value;
and the data updating unit is used for updating the three-dimensional model data according to the three-dimensional coordinate value of each feature point.
Further, in some optional embodiments of the present invention, the data conversion module 12 further includes:
a dividing unit configured to divide an outer peripheral surface of the three-dimensional model into a plurality of rectangles;
a rotating unit, configured to rotate the three-dimensional model so that the rectangles rotate one by one to a position right in front of the three-dimensional model;
and the intercepting unit is used for intercepting the rectangles which are positioned in front of the three-dimensional model at each time, and arranging and splicing the rectangles one by one according to the intercepting time sequence to form the two-dimensional model.
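The division-rotation-capture sequence implemented by the three units above can be sketched as follows. This is a minimal illustration under assumed interfaces (a `set_rotation` method and a `capture_front` callback), not the patented implementation, and the sector count of 36 is an example value.

```python
# Sketch: unfold a rotationally symmetric 3D model into a 2D strip.
# The outer surface is divided into N angular "rectangles"; each is
# rotated to the front, captured, and the captures are spliced in
# capture order. Model/capture interfaces are assumptions.

N_RECTANGLES = 36  # 360 degrees / 36 = 10 degrees per rectangle

def unfold(model, capture_front):
    strips = []
    for i in range(N_RECTANGLES):
        model.set_rotation(i * 360.0 / N_RECTANGLES)  # bring rectangle i to the front
        strips.append(capture_front(model))           # capture the front rectangle
    return strips  # splicing left-to-right yields the two-dimensional model

class _DemoModel:
    """Stand-in model that only records its current rotation angle."""
    def __init__(self):
        self.angle = 0.0
    def set_rotation(self, angle):
        self.angle = angle

strips = unfold(_DemoModel(), lambda m: m.angle)
```

In practice each capture would be an image tile rather than an angle, and the splice would concatenate tiles horizontally.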
Further, in some optional embodiments of the present invention, the data conversion module 12 further includes:
the selection unit is used for selecting a starting line on the outer peripheral surface of the three-dimensional model and dividing the outer peripheral surface of the three-dimensional model into a plurality of rectangles by taking the starting line as a reference.
Further, in some optional embodiments of the present invention, the data conversion module 12 further includes:
the first processing unit is used for carrying out normalization processing on the three-dimensional coordinate value corresponding to the feature point to obtain a normalized three-dimensional coordinate value corresponding to the feature point;
and the second processing unit is used for carrying out normalization processing on the two-dimensional coordinate values corresponding to the feature points to obtain normalized two-dimensional coordinate values corresponding to the feature points.
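A minimal sketch of the normalization these two processing units perform, using the conventions given in claim 1 (U = θ/360°, V as a ratio of axial heights, X and Y divided by the 1920 × 1080 resolution); the model height used here is an assumed example value, not from the document.

```python
# Hedged sketch of coordinate normalization. The 1920x1080 resolution
# comes from the document; MODEL_HEIGHT is an assumed example value.

MODEL_HEIGHT = 300.0  # assumed axial height of the three-dimensional model

def normalize_uv(theta_deg, axial_height, model_height=MODEL_HEIGHT):
    """U = theta / 360 degrees; V = point axial height / model axial height."""
    return theta_deg / 360.0, axial_height / model_height

def normalize_xy(x1, y1, width=1920, height=1080):
    """X = X1/1920, Y = Y1/1080 for a 1920x1080 two-dimensional model."""
    return x1 / width, y1 / height
```

For example, a point at θ = 90° and half the model's height normalizes to (0.25, 0.5).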
Further, in some optional embodiments of the present invention, the data conversion module 12 further includes:
and the data conversion unit is used for converting the three-dimensional model data into editable three-dimensional model data.
The functions or operation steps of the modules and units when executed are substantially the same as those of the method embodiments, and are not described herein again.
In summary, in the data processing apparatus of the above embodiment of the present invention, the three-dimensional model data of a target object scanned by a scanner is acquired; the three-dimensional model data is converted into two-dimensional model data in a first preset format; and when a modification instruction for the two-dimensional model data is received, the two-dimensional model data is updated according to the modification instruction, and the three-dimensional model data is updated and displayed according to the updated two-dimensional model data. The projection content can thus be created and modified while observing the image displayed on the object, and the modified content is displayed on the object in real time, which reduces the cost of developing projection content and improves the efficiency of developing and producing it, thereby solving the prior-art problems of high production cost and low efficiency in developing projection content.
The present invention also provides a computer-readable storage medium on which a computer program is stored, which program, when executed by a processor, is adapted to carry out the above-mentioned data processing method.
EXAMPLE III
Referring to fig. 11, a data processing device according to a third embodiment of the present invention is shown, which includes a memory 20, a processor 10, and a computer program 30 stored in the memory and executable on the processor; when the processor executes the computer program, the data processing method described above is implemented.
The processor 10 may in some embodiments be a central processing unit (CPU), controller, microcontroller, microprocessor, or other data processing chip, and is configured to run the program code stored in the memory 20 or to process data, for example to execute the data processing program.
The memory 20 includes at least one type of readable storage medium, which includes a flash memory, a hard disk, a multimedia card, a card type memory (e.g., SD or DX memory, etc.), a magnetic memory, a magnetic disk, an optical disk, and the like. The memory 20 may in some embodiments be an internal storage unit of the data processing device, for example a hard disk of the data processing device. The memory 20 may also be an external storage device of the data processing apparatus in other embodiments, such as a plug-in hard disk provided on the data processing apparatus, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like. Further, the memory 20 may also include both an internal storage unit and an external storage device of the data processing apparatus. The memory 20 may be used not only to store application software installed in the data processing apparatus and various kinds of data, but also to temporarily store data that has been output or will be output.
The scheme of the invention comprises two parts: a device part and a software part. The device part, shown in fig. 7, comprises a computer and a scanner 100. The scanner 100 obtains point cloud data of a three-dimensional model of an object by scanning the object's external contour, so that the computer can subsequently process the data. The computer comprises a display 300 and a graphics processor 200. The graphics processor 200 is mainly responsible for reading the data from the scanner 100, editing the read data, and running the software, which covers the subsequent processing such as rendering the two-dimensional image and displaying the rendered content on the three-dimensional model in real time; the display 300 shows the processor's operation. The software part comprises reverse modeling software, three-dimensional modeling software, and image editing software. The reverse modeling software converts the point cloud data of the three-dimensional object obtained by the scanner 100 into editable three-dimensional model data, in a format such as STP or OBJ. The three-dimensional modeling software converts the format of the three-dimensional model data so that it can be read by the image editing software, and unfolds the three-dimensional model data to obtain a two-dimensional image. The image editing software can read both the three-dimensional model data and the two-dimensional model data and is used to develop and create the two-dimensional image; the created content may include pictures, videos, special effects, colors, grayscales, and the like.
The image editing software comprises four modules: a two-dimensional image acquisition module, a three-dimensional model acquisition module, an image development and production module, and a two-dimensional image and three-dimensional model interaction module. The two-dimensional image acquisition module and the three-dimensional model acquisition module acquire/read the two-dimensional model data and the three-dimensional model data. The image development and production module produces the projection content, which is created on the basis of the two-dimensional image. Finally, the two-dimensional image and three-dimensional model interaction module achieves the purpose of instantly displaying and modifying the produced content: it displays the two-dimensional image created by the image development and production module on the three-dimensional model data in real time.
It should be noted that the configuration shown in fig. 11 does not constitute a limitation of the data processing apparatus, which may comprise fewer or more components than shown, or some components may be combined, or a different arrangement of components in other embodiments.
In summary, in the data processing device of the above embodiment of the present invention, the three-dimensional model data of a target object scanned by a scanner is acquired; the three-dimensional model data is converted into two-dimensional model data in a first preset format; and when a modification instruction for the two-dimensional model data is received, the two-dimensional model data is updated according to the modification instruction, and the three-dimensional model data is updated and displayed according to the updated two-dimensional model data. The projection content can thus be created and modified while observing the image displayed on the object, and the modified content is displayed on the object in real time, which reduces the cost of developing projection content and improves the efficiency of developing and producing it, thereby solving the prior-art problems of high production cost and low efficiency in developing projection content.
Those skilled in the art will understand that the logic and/or steps represented in the flowcharts or otherwise described herein may be viewed as an ordered listing of executable instructions for implementing logical functions, and can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, a processor-containing system, or another system that can fetch instructions from the instruction execution system, apparatus, or device and execute them. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
The above-mentioned embodiments express only several implementations of the present invention, and their description is relatively specific and detailed, but should not therefore be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and improvements without departing from the inventive concept, all of which fall within the protection scope of the present invention. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (4)

1. A method of data processing, the method comprising the steps of:
acquiring three-dimensional model data of a target object scanned by a scanner;
converting the three-dimensional model data into two-dimensional model data in a first preset format;
when a modification instruction for the two-dimensional model data is received, updating the two-dimensional model data according to the modification instruction, and updating and displaying the three-dimensional model data according to the updated two-dimensional model data;
the step of converting the three-dimensional model data into two-dimensional model data in a first preset format includes:
establishing a three-dimensional coordinate system by taking the bottom center of the three-dimensional model as a coordinate origin and the axial direction of the three-dimensional model as the Y-axis direction;
setting a starting line of a three-dimensional model, and acquiring UV values corresponding to all characteristic points in the three-dimensional model data;
unfolding the three-dimensional model into a two-dimensional model, and determining the XY value of each characteristic point on the two-dimensional model according to the starting line;
determining a UV value corresponding to the XY value of each feature point according to the corresponding relation between the XY value and the UV value;
updating the three-dimensional model data according to the UV value of each characteristic point;
the step of unfolding the three-dimensional model into a two-dimensional model comprises:
selecting a starting line on the outer peripheral surface of the three-dimensional model, and dividing the outer peripheral surface of the three-dimensional model into a plurality of rectangles by taking the starting line as a reference;
rotating the three-dimensional model to enable the rectangles to rotate to the front of the three-dimensional model one by one;
intercepting rectangles which are positioned right in front of the three-dimensional model each time, and arranging and splicing the rectangles one by one according to the intercepting time sequence to form the two-dimensional model;
the step of obtaining the corresponding normalized UV value and the normalized XY value according to the corresponding relation between the XY value and the UV value comprises the following steps of respectively carrying out normalization processing on the UV value and the XY value corresponding to the feature point to obtain the corresponding normalized UV value and the normalized XY value, wherein the normalized UV value comprises:
creating a rotating surface by passing through the origin of coordinates and the central axis of the three-dimensional model, and setting the rotating starting point of the rotating surface to be 0 degree;
A is any point on the three-dimensional model, located on the intersection line of the rotating surface and the three-dimensional model; the normalized coordinates of point A are (U, V), where U = θ/360°, the value V is the axial height of point A divided by the axial height of the three-dimensional model, and θ is the angle, measured in the base plane, between the projection of the intersection line through point A and the rotation starting point of the rotating surface;
the normalized XY values include:
A1 (X1, Y1) is a point on the two-dimensional model and (X, Y) are its normalized coordinates, so X = X1/1920 and Y = Y1/1080, where 1920 and 1080 come from the resolution 1920 × 1080 of the two-dimensional model.
2. The data processing method according to claim 1, wherein before obtaining the UV value corresponding to each feature point in the three-dimensional model data, the method further comprises:
and converting the three-dimensional model data into editable three-dimensional model data.
3. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the data processing method of any one of claims 1-2.
4. A data processing device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the data processing method according to any of claims 1-2 when executing the program.
CN202010938064.1A 2020-09-09 2020-09-09 Data processing method, device, storage medium and equipment Active CN111932448B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010938064.1A CN111932448B (en) 2020-09-09 2020-09-09 Data processing method, device, storage medium and equipment


Publications (2)

Publication Number Publication Date
CN111932448A CN111932448A (en) 2020-11-13
CN111932448B true CN111932448B (en) 2021-03-26

Family

ID=73309858

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010938064.1A Active CN111932448B (en) 2020-09-09 2020-09-09 Data processing method, device, storage medium and equipment

Country Status (1)

Country Link
CN (1) CN111932448B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112614046B (en) * 2020-12-17 2024-02-23 武汉达梦数据技术有限公司 Method and device for drawing three-dimensional model on two-dimensional plane
CN112884870A (en) * 2021-02-26 2021-06-01 深圳市商汤科技有限公司 Three-dimensional model expansion method, electronic device and computer storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103390286A (en) * 2013-07-11 2013-11-13 梁振杰 Method and system for modifying virtual characters in games
CN104318002A (en) * 2014-10-17 2015-01-28 上海衣得体信息科技有限公司 Method for converting three-dimensional clothing effect to two-dimensional clothing effect
CN106023309A (en) * 2016-06-15 2016-10-12 云之衣(厦门)服饰有限公司 Garment three-dimensional digital customization method
CN106021804A (en) * 2016-06-06 2016-10-12 同济大学 Model change design method of hydraulic torque converter blade grid system
CN107471218A (en) * 2017-09-07 2017-12-15 南京理工大学 A kind of tow-armed robot hand eye coordination method based on multi-vision visual
CN108073924A (en) * 2016-11-17 2018-05-25 富士通株式会社 Image processing method and device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109920050B (en) * 2019-03-01 2020-08-14 中北大学 Single-view three-dimensional flame reconstruction method based on deep learning and thin plate spline
CN110838167B (en) * 2019-11-05 2024-02-06 网易(杭州)网络有限公司 Model rendering method, device and storage medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant