CN110288707B - Three-dimensional dynamic modeling method and system - Google Patents


Info

Publication number
CN110288707B
CN110288707B (application CN201910599529.2A)
Authority
CN
China
Prior art keywords
optical device
moving object
dimensional
target object
dimensional model
Prior art date
Legal status (assumed; not a legal conclusion)
Active
Application number
CN201910599529.2A
Other languages
Chinese (zh)
Other versions
CN110288707A (en)
Inventor
郑万林
段浩扬
Current Assignee (listed assignees may be inaccurate)
CSIC Orlando Wuxi Software Technology Co.,Ltd.
Original Assignee
CSIC Orlando Wuxi Software Technology Co., Ltd.
Priority date (assumed; not a legal conclusion)
Filing date
Publication date
Application filed by CSIC Orlando Wuxi Software Technology Co., Ltd.
Priority to CN201910599529.2A
Publication of CN110288707A
Application granted
Publication of CN110288707B
Legal status: Active
Anticipated expiration


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses a three-dimensional dynamic modeling method and system. The method comprises the following steps: obtaining dimension pictures of a moving object; extracting at least one contour picture from the plurality of dimension pictures; generating a reference coordinate system from the contour picture; and generating a dynamic three-dimensional model of the moving object based on the reference coordinate system. With the method and system, a dynamic three-dimensional model can be generated from a plurality of static pictures taken by a camera, so that the motion of the moving object during movement is presented flexibly, stereoscopically, and intuitively.

Description

Three-dimensional dynamic modeling method and system
Technical Field
The present application relates to the field of images, and in particular, to a method and a system for three-dimensional dynamic modeling.
Background
In the prior art, a stereo camera is generally used to photograph movable objects such as people and animals. Multiple groups of images can be obtained by such shooting, from which a three-dimensional model of the photographed moving object is generated. However, because these groups of images capture still states, the conventional way of generating a three-dimensional model is simply to photograph the moving object multiple times or from multiple angles. If the object actually moves, a dynamic three-dimensional model cannot be generated: the conventional approach yields only a three-dimensional model of the object in a static state and cannot reflect the changes in its actions during movement. A more flexible model-generation method is therefore needed, one that can effectively generate a dynamic three-dimensional model of a moving object without affecting the object's normal movement.
Disclosure of Invention
The present application is directed to a three-dimensional dynamic modeling method and system that can generate a dynamic three-dimensional model from a plurality of static pictures taken by a camera, so that the motion of a moving object during movement is presented flexibly, stereoscopically, and intuitively.
To achieve the above object, the present application provides a three-dimensional dynamic modeling method comprising the following steps: obtaining dimension pictures of a moving object; extracting at least one contour picture from the plurality of dimension pictures; generating a reference coordinate system from the contour picture; and generating a dynamic three-dimensional model of the moving object based on the reference coordinate system.
As above, the dimension pictures are pictures taken from various angles at a given moment during the movement of the moving object.
As above, at least one clear picture containing the complete moving object is selected from the plurality of dimension pictures as the contour picture, and contour segmentation is performed on the contour picture.
As above, selecting the contour picture specifically comprises the following sub-steps: determining target object regions in the dimension picture; determining, from at least one target object region, whether a target object is present; if the target object exists, comparing the pixels of the target object with those of the original moving object; and if the pixel difference between the target object and the original moving object does not exceed a specified threshold, defining the picture whose target object region contains the target object as a contour picture.
As above, if the target object does not exist, the target object area is checked to see whether the calibration is wrong.
As above, wherein the reference coordinate system is represented as:
X = S·A / (A − A′)
Y = S·B / (A − A′)
Z = S·F / (A − A′)    (formula one)
where S is the distance between the first optical device and the second optical device in the camera that photograph the moving object, A is the horizontal coordinate of the moving object in the image taken by the first optical device, B is the vertical coordinate of the moving object in that image, A − A′ is the difference between the horizontal coordinates of the moving object as photographed by the first and second optical devices, and F is the focal length of the first and second optical devices, the two focal lengths being equal.
As above, before the dynamic three-dimensional model is generated, the method further comprises generating a reference three-dimensional model from the reference coordinate system, the reference three-dimensional model being the base model used to generate the dynamic three-dimensional model.
As above, wherein the generating of the dynamic three-dimensional model specifically comprises the following sub-steps: acquiring a comparison three-dimensional coordinate of each moving picture; generating a comparison three-dimensional model by using the comparison three-dimensional coordinates; comparing the comparison three-dimensional model with the reference three-dimensional model, and selecting a region different from the reference three-dimensional model; synthesizing a region distinguished from the reference three-dimensional model in the reference three-dimensional model; in response to the region synthesis, a dynamic three-dimensional model of the moving object is generated.
A three-dimensional dynamic modeling system specifically comprises an acquisition unit, an extraction unit and a generation unit; an acquisition unit that acquires a dimensional picture of a moving object; the extraction unit is used for extracting a contour picture from the dimension picture; and the generating unit is used for generating a reference coordinate system according to the contour picture and finally generating a dynamic three-dimensional model.
As above, the extraction unit includes the following sub-modules: the device comprises a determining module, a comparing module and a checking module; the determining module is used for determining a target object area in the dimension picture and whether a target object exists in the target object area; the comparison module is used for comparing the pixels of the target object and the original moving object if the target object exists in the target object area; and the inspection module is used for inspecting the target object region if the target object does not exist in the target object region.
The application has the following beneficial effects:
(1) the three-dimensional dynamic modeling method and the system thereof can generate a dynamic three-dimensional model according to a plurality of static pictures shot by the camera, so that the moving action of a moving object in the moving process is more flexible and three-dimensional in an intuitive state.
(2) The three-dimensional dynamic modeling method and system can select clear contour pictures containing the complete moving object from the plurality of dimension pictures, discard the regions of the dimension pictures that the dynamic three-dimensional model does not need, and thereby reduce the computation time of subsequent dynamic three-dimensional model generation.
(3) The three-dimensional dynamic modeling method and system complete region synthesis through coordinate transformation and thereby complete the generation of the dynamic three-dimensional model, ensuring the accuracy of the synthesis process and of the resulting dynamic three-dimensional model.
Drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flow chart of a method of three-dimensional dynamic modeling provided in accordance with an embodiment of the present application;
FIG. 2 is an internal block diagram of a three-dimensional dynamic modeling system provided in accordance with an embodiment of the present application;
FIG. 3 is a diagram of internal sub-modules of a three-dimensional dynamic modeling system provided in accordance with an embodiment of the present application;
fig. 4 is a diagram of another internal sub-module structure of a three-dimensional dynamic modeling system provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The application relates to a method and a system for three-dimensional dynamic modeling. According to the method and the device, the dynamic three-dimensional model can be generated according to the plurality of static pictures shot by the camera, so that the moving action of the moving object in the moving process is more flexible and three-dimensional in an intuitive state.
Fig. 1 is a flowchart illustrating a method for three-dimensional dynamic modeling provided in the present application.
Step S110: pictures of various dimensions of the moving object are obtained.
Specifically, a dimension picture is a picture taken from a particular angle at a given moment during the movement of the moving object. For example, if the moving object takes one step forward, the pictures taken from various angles during that movement are dimension pictures. There are multiple dimension pictures; preferably, each movement action corresponds to multiple dimension pictures.
Preferably, the dimension picture of the moving object can be acquired within a specified moving range, and the specified moving range can be a range which is set by a system in advance for the moving object or a range which is manually selected by a user according to requirements.
Step S120: and extracting at least one contour picture in the multiple dimension pictures.
Specifically, since a dimension picture shows the moving object from one angle at a given time, the moving object appearing in a dimension picture may be clear or blurred, and the picture may contain a partial moving object, a complete moving object, or no moving object at all. Therefore, only at least one clear picture containing the complete moving object needs to be selected as the contour picture.
Further, extracting a profile picture from the dimension picture specifically includes the following sub-steps:
step P1: and determining a target object area in the dimension picture.
The target object regions may be obtained by dividing the dimension picture into a grid of small squares, for example 4 × 4 (or another grid size), and taking each square as a target object region.
Specifically, the number of target object regions is plural.
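As a concrete illustration of step P1, the grid partition described above can be sketched as follows; the function name and the `block` parameter are illustrative assumptions, since the embodiment only specifies dividing the picture into small squares such as a 4 × 4 grid.

```python
import numpy as np

def split_into_regions(picture, block=4):
    """Divide a picture into a block x block grid of candidate
    target object regions (block=4 gives the 4 x 4 example above)."""
    h, w = picture.shape[:2]
    rh, rw = h // block, w // block
    regions = []
    for i in range(block):
        for j in range(block):
            # each small square becomes one target object region
            regions.append(picture[i * rh:(i + 1) * rh, j * rw:(j + 1) * rw])
    return regions

regions = split_into_regions(np.zeros((128, 128), dtype=np.uint8))
# a 128 x 128 picture yields 16 regions of 32 x 32 pixels each
```

Each region is then examined in step P2 for the presence of the target object.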
Step P2: from at least one target object region, it is determined whether a target object is present.
Since a target object region may contain a moving object that differs partially or completely from the surrounding environment, it is necessary to determine whether a real moving object (referred to as the target object) exists in the region. If a target object exists, step P3 is performed; otherwise, step P5 is executed.
Further, before determining the target object, the method further includes storing the moving object before moving in a clear image or picture manner, where the stored moving object is referred to as an original moving object in this embodiment.
The target object regions are compared with the original moving object. If a region is found to coincide with part or all of the original moving object, the target object does exist in that region. If a region coincides with only part of the original moving object, the target object in that region is incomplete; step P1 is executed again and target object regions are re-selected until a region coinciding with the whole original moving object is found. If a region coincides with all of the original moving object, step P3 is executed. If the image comparison fails, the region does not contain the target object, and step P5 is executed.
Preferably, the image comparison between the target object region and the original moving object is referred to the image comparison in the prior art, which is not described herein again.
Step P3: compare the pixels of the target object with those of the original moving object.
If the pixel difference between the target object and the original moving object does not exceed the specified threshold, the target object is considered sufficiently clear and step P4 is executed; otherwise, the dimension pictures of the moving object are obtained again.
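Steps P3 and P4 can be sketched as a simple pixel comparison; the mean-absolute-difference measure and the threshold value are illustrative assumptions, since the embodiment only requires that the pixel difference not exceed a specified threshold.

```python
import numpy as np

def sharp_enough(target, original, threshold=10.0):
    """Return True when the pixel difference between the candidate
    target object and the stored original moving object stays within
    the specified threshold (so the picture qualifies as a contour
    picture in step P4)."""
    # cast to a signed type so uint8 subtraction cannot wrap around
    diff = np.mean(np.abs(target.astype(np.int32) - original.astype(np.int32)))
    return bool(diff <= threshold)

# a small difference passes; a large one triggers re-acquisition
sharp_enough(np.full((8, 8), 100, np.uint8), np.full((8, 8), 104, np.uint8))  # True
```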
Step P4: define the picture whose target object region contains the target object as a contour picture.
Step P5: check the target object region to see whether its calibration is wrong.
If the calibration is incorrect, step P1 is executed again, otherwise the process exits.
Through the above steps, the search range within the whole dimension picture is narrowed, so that a clear picture occupying little memory is selected as the contour picture, reducing the computation time for subsequently generating the dynamic three-dimensional model.
Step S130: generate a reference coordinate system from the contour picture.
In particular, at least one reference coordinate system is generated from at least one contour picture.
The reference coordinate system gives the three-dimensional coordinates of the contour picture in the camera coordinate system. Specifically, since the contour picture is selected from dimension pictures taken by the camera, the three-dimensional coordinates generated from the contour picture are coordinates converted with the camera as the projection viewpoint. At least one reference coordinate system can therefore be expressed as:
X = S·A / (A − A′)
Y = S·B / (A − A′)
Z = S·F / (A − A′)    (formula one)
where S is the distance between the first and second optical devices in the camera that photograph the moving object, A is the horizontal coordinate of the moving object in the image taken by the first optical device (in step S130, the image of the moving object is the image in the contour picture), B is the vertical coordinate of the moving object in that image, A − A′ is the difference between the horizontal coordinates of the moving object as photographed by the first and second optical devices, and F is the focal length of the first and second optical devices, the two focal lengths being equal.
It should be noted that, since the first and second optical devices photograph the moving object at arbitrary times to form the dimension pictures, the horizontal and vertical coordinates of the moving object during or after shooting are denoted by A and B, whose values differ from picture to picture.
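Formula one matches standard two-view stereo triangulation, with S the baseline between the two optical devices, F the shared focal length, and A − A′ the disparity. A minimal sketch under that reading (function name and numeric units are assumptions):

```python
def reference_coordinates(S, F, A, B, A_prime):
    """Stereo triangulation per formula one:
    X = S*A/(A - A'), Y = S*B/(A - A'), Z = S*F/(A - A')."""
    d = A - A_prime  # disparity between the first and second optical devices
    return S * A / d, S * B / d, S * F / d

# baseline 0.1, focal length 500 (pixel units), disparity 10
x, y, z = reference_coordinates(S=0.1, F=500.0, A=260.0, B=130.0, A_prime=250.0)
# -> x = 2.6, y = 1.3, z = 5.0
```

Note that depth Z grows as the disparity A − A′ shrinks, which is the usual behavior of a two-camera rig.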
Step S140: a dynamic three-dimensional model of the moving object is generated based on the reference coordinate system.
Before generating the dynamic three-dimensional model, generating a reference three-dimensional model according to a reference coordinate system. Wherein the reference three-dimensional model is a base model of a dynamic three-dimensional model of a synthetic moving object. The reference three-dimensional model is a model of the moving object in a stationary state.
Generating the dynamic three-dimensional model specifically comprises the following sub-steps:
and D1, acquiring the comparison three-dimensional coordinates of each moving picture.
A moving picture is any dimension picture that remains after the dimension picture selected as the contour picture is removed. For example, if the contour picture is selected from one of 10 dimension pictures, the remaining 9 dimension pictures are defined as moving pictures.
The comparison three-dimensional coordinates are expressed in the same form as formula one, i.e., as coordinates converted with the camera as the viewpoint.
Specifically, the comparison three-dimensional coordinates can be expressed as:
X′ = S·A′ / (A′ − A″)
Y′ = S·B′ / (A′ − A″)
Z′ = S·F / (A′ − A″)    (formula two)
where S is the distance between the first and second optical devices in the camera that photograph the moving object, A′ is the horizontal coordinate of the moving object in a moving picture taken by the first optical device, B′ is the vertical coordinate of the moving object in that moving picture, A′ − A″ is the difference between the horizontal coordinates of the moving object in the moving pictures taken by the first and second optical devices, and F is the focal length of the first and second optical devices, the two focal lengths being equal.
Step D2: generate the comparison three-dimensional model from the comparison three-dimensional coordinates.
Specifically, the coordinates given by formula two are used to generate the comparison three-dimensional model.
Step D3: compare the comparison three-dimensional model with the reference three-dimensional model and select the regions that differ from the reference three-dimensional model.
Specifically, in the comparison process, a three-dimensional moving object in the reference three-dimensional model is taken as a reference.
For example, suppose the reference three-dimensional model shows the moving object stationary from the front (or from another direction such as the back), while in the comparison three-dimensional model the object is moving forward with its left foot raised (the moving object may take various forms; in this embodiment it is a person). A region that differs from the stationary reference three-dimensional model, namely the raised left foot, appears in the comparison three-dimensional model, so step D4 is executed. As another example, if the front of the moving object is divided into regions A, B, D, etc., and regions A, B, C, D appear in the comparison three-dimensional model, then region C is the region that differs from the reference three-dimensional model.
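Treating the models as collections of named regions, the selection in step D3 reduces to a set difference; representing regions by labels, as in the A, B, C, D example above, is an illustrative simplification.

```python
def regions_differing(reference_regions, comparison_regions):
    """Select the regions of the comparison three-dimensional model
    that are absent from the reference three-dimensional model."""
    reference = set(reference_regions)
    return [r for r in comparison_regions if r not in reference]

regions_differing(["A", "B", "D"], ["A", "B", "C", "D"])  # -> ["C"]
```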
Step D4: synthesize the regions that differ from the reference three-dimensional model into the reference three-dimensional model.
For example, if the region D appears in the comparison model, the region D is synthesized in the corresponding reference model.
Specifically, before the region synthesis, the method further comprises the step of converting the coordinates of the synthesized region.
Since the transformed coordinates are coordinates in the reference three-dimensional model, they are represented in the coordinate system X, Y, Z, and the transformation can be written as:

[X]   [r1 r2 r3] [X′]   [Tx]
[Y] = [r4 r5 r6] [Y′] + [Ty]    (formula three)
[Z]   [r7 r8 r9] [Z′]   [Tz]

where X, Y, Z is the coordinate system of the reference three-dimensional model defined in formula one, and X′, Y′, Z′ are the coordinates in the comparison three-dimensional model defined in formula two. The matrix with entries r1, r2, ..., r9 is an orthogonal rotation matrix; when the two coordinate systems share the same orientation it reduces to the identity, i.e. r1 = 1, r2 = 0, r3 = 0; r2 = 0, r5 = 1, r8 = 0; r3 = 0, r6 = 0, r9 = 1. T = (Tx, Ty, Tz) is the translation vector, i.e. the three-dimensional coordinates of the origin of the comparison three-dimensional model in the coordinate system of the reference three-dimensional model.
Once the coordinate conversion in step D4 is complete, the converted coordinates coincide with the coordinates of the corresponding region in the reference three-dimensional model, so the converted regions can be combined with the corresponding regions of the reference three-dimensional model.
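The coordinate conversion of step D4 is an ordinary rigid transform; a minimal sketch, assuming the rotation matrix and translation vector are already known (the function name and default arguments are illustrative):

```python
import numpy as np

def to_reference_frame(p_comparison, R=np.eye(3), T=np.zeros(3)):
    """Map a point from the comparison model's coordinate system into
    the reference model's system: X = R @ X' + T, where R is the
    orthogonal rotation matrix (entries r1...r9) and T = (Tx, Ty, Tz)
    is the comparison model's origin expressed in reference coordinates."""
    return R @ np.asarray(p_comparison, dtype=float) + T

# with R the identity (same orientation), only the translation applies
p = to_reference_frame([1.0, 2.0, 3.0], T=np.array([1.0, 0.0, 0.0]))
# -> array([2., 2., 3.])
```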
Step D5: in response to the region synthesis, a dynamic three-dimensional model of the moving object is generated.
The reference three-dimensional model after region synthesis is defined as the three-dimensional model of the moving object. Because there are multiple moving pictures, multiple comparison three-dimensional models are generated; synthesizing the differing regions of these comparison models, each reflecting a different action, into the reference three-dimensional model yields the dynamic three-dimensional model of the moving object.
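Steps D1 to D5 as a whole can be sketched schematically; representing each model as a mapping from region name to region data is an assumption made for illustration, not part of the patent.

```python
def build_dynamic_model(reference, comparison_models):
    """For each comparison model, synthesize its differing regions into
    a copy of the reference model (steps D3/D4); the resulting sequence
    of frames constitutes the dynamic three-dimensional model (step D5)."""
    frames = []
    for differing in comparison_models:
        frame = dict(reference)      # start from the base model
        frame.update(differing)      # overlay the regions that changed
        frames.append(frame)
    return frames

frames = build_dynamic_model({"A": 0, "B": 1}, [{"C": 2}, {"B": 9}])
# frame 0 adds region C; frame 1 replaces region B
```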
The present application further includes providing a three-dimensional dynamic modeling system, as shown in fig. 2, which includes an acquiring unit 201, an extracting unit 202, an imaging unit 203, and a generating unit 204.
Wherein the acquiring unit 201 is used for acquiring a dimension picture of a moving object.
The extracting unit 202 is connected to the obtaining unit 201, and is configured to extract a contour picture from the dimension picture.
The imaging unit 203 is used to capture a moving object.
The generating unit 204 is connected to the extracting unit 202 and the imaging unit 203, respectively, and is configured to generate a reference coordinate system from the contour picture, and finally generate a dynamic three-dimensional model.
Further, as shown in fig. 3, the extracting unit 202 includes a determining module 301, a comparing module 302, and a checking module 303.
The determining module 301 is configured to determine a target object region in the dimension picture and whether a target object exists in the target object region.
The comparing module 302 is connected to the determining module 301, and configured to compare pixels of the target object and pixels of the original moving object if the target object exists in the target object region.
The checking module 303 is connected to the determining module 301, and is configured to check the target object region if the target object does not exist in the target object region.
Still further, as shown in fig. 4, the generation unit 204 is connected to the imaging unit 203, and is divided into a first generation unit 401, a second generation unit 402, and a synthesis unit 403.
The first generation unit 401 is configured to generate a reference three-dimensional model according to the horizontal and vertical coordinates of a selected contour image in dimensional images captured by the first and second optical devices (not shown in the figure) in the image capturing unit 203.
The second generating unit 402 is configured to generate a comparison three-dimensional model, and specifically, the second generating unit 402 further includes a coordinate obtaining module and a model generating module (not shown in the figure), where the coordinate obtaining module is configured to obtain comparison three-dimensional coordinates of moving pictures captured by the first and second optical devices in the image capturing unit 203, and the model generating module is connected to the coordinate obtaining module and configured to generate the comparison three-dimensional model according to the comparison three-dimensional coordinates.
The synthesizing unit 403 is connected to the first generating unit 401 and the second generating unit 402 and is configured to synthesize the differing regions of the comparison three-dimensional model into the reference three-dimensional model, finally forming the dynamic three-dimensional model of the moving object.
Although the present application has been described with reference to examples, which are intended to be illustrative only and not to be limiting of the application, changes, additions and/or deletions may be made to the embodiments without departing from the scope of the application.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (8)

1. A three-dimensional dynamic modeling method is characterized by comprising the following steps:
obtaining pictures of all dimensions of a moving object;
extracting at least one contour picture in a plurality of dimension pictures;
generating a reference coordinate system according to the contour picture;
generating a dynamic three-dimensional model of the moving object based on the reference coordinate system;
wherein the reference coordinate system is expressed as:
X = S·A / (A − A′)
Y = S·B / (A − A′)
Z = S·F / (A − A′)    (formula one)
wherein S is a distance between a first optical device and a second optical device in a camera for photographing the moving object, A is a horizontal coordinate of the moving object in an image taken by the first optical device, B is a vertical coordinate of the moving object in that image, A − A′ is a difference between horizontal coordinates of the moving object as photographed by the first and second optical devices, and F is a focal length of the first and second optical devices, the focal lengths of the first and second optical devices being equal;
the generation of the dynamic three-dimensional model comprises in particular the following sub-steps:
acquiring a comparison three-dimensional coordinate of each moving picture;
generating a comparison three-dimensional model by using the comparison three-dimensional coordinates;
comparing the comparison three-dimensional model with the reference three-dimensional model, and selecting a region different from the reference three-dimensional model;
synthesizing a region distinguished from the reference three-dimensional model in the reference three-dimensional model;
generating a dynamic three-dimensional model of the moving object in response to the region synthesis;
wherein the comparison three-dimensional coordinates are expressed in the same form as formula one and are coordinates converted with the camera as the viewpoint;
the aligned three-dimensional coordinates can be expressed as:
X′ = S·A′ / (A′ − A″)
Y′ = S·B′ / (A′ − A″)
Z′ = S·F / (A′ − A″)    (formula two)
wherein S is a distance between the first and second optical devices in the camera for photographing the moving object, A′ is a horizontal coordinate of the moving object in a moving picture taken by the first optical device, B′ is a vertical coordinate of the moving object in that moving picture, A′ − A″ is a difference between the horizontal coordinates of the moving object in the moving pictures taken by the first and second optical devices, and F is a focal length of the first and second optical devices, the focal lengths of the first and second optical devices being equal;
a reference three-dimensional model is generated according to a reference coordinate system, and the reference three-dimensional model is a basic model for synthesizing a dynamic three-dimensional model of a moving object.
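The patent's formula images are not reproduced in the text above, so the exact expressions are unknown. As a hedged reconstruction: the stated variable definitions (baseline S, image coordinates A and B, disparity A - A', shared focal length F) are consistent with the standard two-view triangulation, which would read:

```latex
% Presumed form of Formula I -- the patent's own formula image is not
% reproduced in the text; this is the standard binocular triangulation
% consistent with the stated definitions (baseline S, image coordinates
% A and B, disparity A - A', focal length F).
X = \frac{S \cdot A}{A - A'}, \qquad
Y = \frac{S \cdot B}{A - A'}, \qquad
Z = \frac{S \cdot F}{A - A'}
```

Under the same assumption, the comparison three-dimensional coordinates would take the same form with the primed image coordinates of the moving pictures substituted in.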
2. The method according to claim 1, wherein the dimension pictures are pictures of the moving object taken from various angles at a given moment during its movement.
3. The method according to claim 2, wherein there are a plurality of dimension pictures, and at least one clear picture containing the complete moving object is selected from the plurality of dimension pictures to serve as the contour picture.
4. The method according to claim 3, wherein the selection of the contour picture specifically comprises the following sub-steps:
determining a target object region in the dimension picture;
determining, for at least one target object region, whether a target object exists;
if the target object exists, comparing the pixels of the target object with those of the original moving object;
if the pixel difference between the target object and the original moving object does not exceed a specified threshold, defining the picture whose target object region contains the target object as the contour picture;
wherein, before the target object is determined, the moving object as it was before moving is stored as a clear image or picture;
and the target object region is compared with the original moving object; if any part of the target object region partially or completely overlaps the original moving object, it is determined that the target object exists in that target object region.
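The selection sub-steps of claim 4 can be sketched as follows. This is an illustrative reading, not the patent's implementation: the function and variable names (`select_contour_picture`, `PIXEL_THRESHOLD`, the picture dictionary layout) are assumptions, and the patent leaves the threshold value and the exact pixel-comparison metric open.

```python
# Hypothetical sketch of claim 4's contour-picture selection.
# All names and the threshold value are illustrative assumptions.

PIXEL_THRESHOLD = 30  # assumed per-pixel difference limit; unspecified in the patent


def regions_overlap(region_positions, original_positions):
    """Claim 4's existence test: the target object 'exists' if the candidate
    region shares at least one pixel position with the stored original object."""
    return bool(set(region_positions) & set(original_positions))


def pixel_difference(region_pixels, original_pixels):
    """Mean absolute intensity difference over positionally paired pixels."""
    diffs = [abs(a - b) for a, b in zip(region_pixels, original_pixels)]
    return sum(diffs) / len(diffs) if diffs else 0.0


def select_contour_picture(pictures, original_positions, original_pixels):
    """Return the first picture whose target object region contains the object
    and whose pixel difference from the stored original stays under the threshold."""
    for pic in pictures:
        if not regions_overlap(pic["positions"], original_positions):
            continue  # no target object here; claim 5 would trigger a calibration check
        if pixel_difference(pic["pixels"], original_pixels) <= PIXEL_THRESHOLD:
            return pic
    return None
```

A region that overlaps the stored original but differs too strongly in pixel values is rejected, matching the two-stage test (existence, then pixel comparison) in the claim.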
5. The method according to claim 4, wherein, if no target object exists, the target object region is checked for calibration errors.
6. The method according to claim 1, further comprising, before generating the dynamic three-dimensional model, generating a reference three-dimensional model from a reference coordinate system, the reference three-dimensional model being the base model for generating the dynamic three-dimensional model.
7. A three-dimensional dynamic modeling system, characterized by comprising an acquisition unit, an extraction unit and a generation unit, wherein:
the acquisition unit acquires dimension pictures of a moving object;
the extraction unit extracts a contour picture from the dimension pictures;
the generation unit generates a reference coordinate system from the contour picture and finally generates a dynamic three-dimensional model;
wherein the reference coordinate system is expressed as:
(Formula I: formula image FDA0002816694190000031, not reproduced in text)
wherein S is the distance between a first optical device and a second optical device in the camera used to photograph the moving object, A is the horizontal coordinate of the moving object in the image captured by the first optical device, B is the vertical coordinate of the moving object in that image, A-A' is the difference between the horizontal coordinates of the moving object as captured by the first and second optical devices, and F is the focal length of the first and second optical devices, the two focal lengths being equal;
the generation of the dynamic three-dimensional model specifically comprises the following sub-steps:
acquiring the comparison three-dimensional coordinates of each moving picture;
generating a comparison three-dimensional model from the comparison three-dimensional coordinates;
comparing the comparison three-dimensional model with the reference three-dimensional model, and selecting the regions that differ from the reference three-dimensional model;
synthesizing the differing regions into the reference three-dimensional model;
generating the dynamic three-dimensional model of the moving object once the region synthesis is complete;
wherein the comparison three-dimensional coordinates are expressed in the same form as Formula I and are converted with the camera as the viewpoint;
the comparison three-dimensional coordinates can be expressed as:
(formula image FDA0002816694190000041, not reproduced in text)
wherein S is the distance between the first optical device and the second optical device in the camera used to photograph the moving object, A' is the horizontal coordinate of the moving object in the moving picture captured by the first optical device, B' is the vertical coordinate of the moving object in that moving picture, A'-A'' is the difference between the horizontal coordinates of the moving object in the moving pictures captured by the first and second optical devices, and F is the focal length of the first and second optical devices, the two focal lengths being equal;
and a reference three-dimensional model is generated from the reference coordinate system, the reference three-dimensional model being the basic model for synthesizing the dynamic three-dimensional model of the moving object.
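The generation unit's pipeline (coordinate conversion, model comparison, region synthesis) can be sketched as below. This assumes the standard binocular triangulation that the variable definitions suggest (X = S·A/(A-A''), Y = S·B/(A-A''), Z = S·F/(A-A''), with A'' the second device's horizontal coordinate); the function names (`triangulate`, `differing_regions`, `synthesize`) and the point-list model representation are illustrative assumptions, not the patent's implementation.

```python
# Illustrative sketch of claims 1 and 7's generation steps, under the assumed
# triangulation formula above. Models are represented as parallel point lists.

def triangulate(S, F, a1, b1, a2):
    """Convert one matched image point into 3-D coordinates with the camera
    as viewpoint: a1, b1 from the first device, a2 from the second."""
    disparity = a1 - a2
    if disparity == 0:
        raise ValueError("zero disparity: point at infinity")
    return (S * a1 / disparity, S * b1 / disparity, S * F / disparity)


def differing_regions(comparison_pts, reference_pts, tol=1e-6):
    """Select the comparison points that differ from the reference model."""
    return [c for c, r in zip(comparison_pts, reference_pts)
            if any(abs(ci - ri) > tol for ci, ri in zip(c, r))]


def synthesize(reference_pts, comparison_pts, tol=1e-6):
    """Merge the differing regions into the reference model, yielding one
    frame of the dynamic three-dimensional model."""
    return [c if any(abs(ci - ri) > tol for ci, ri in zip(c, r)) else r
            for c, r in zip(comparison_pts, reference_pts)]
```

Repeating the compare-and-synthesize step for each moving picture's comparison coordinates would produce the sequence of frames that constitutes the dynamic model.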
8. The three-dimensional dynamic modeling system of claim 7, wherein the extraction unit comprises the following sub-modules: a determining module, a comparing module and a checking module;
the determining module determines a target object region in the dimension picture and whether a target object exists in the target object region;
the comparing module compares the pixels of the target object with those of the original moving object if the target object exists in the target object region;
the checking module checks the target object region if no target object exists in the target object region;
wherein, before the target object is determined, the moving object as it was before moving is stored as a clear image or picture;
and the target object region is compared with the original moving object; if any part of the target object region partially or completely overlaps the original moving object, it is determined that the target object exists in that target object region.
CN201910599529.2A 2019-07-04 2019-07-04 Three-dimensional dynamic modeling method and system Active CN110288707B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910599529.2A CN110288707B (en) 2019-07-04 2019-07-04 Three-dimensional dynamic modeling method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910599529.2A CN110288707B (en) 2019-07-04 2019-07-04 Three-dimensional dynamic modeling method and system

Publications (2)

Publication Number Publication Date
CN110288707A CN110288707A (en) 2019-09-27
CN110288707B true CN110288707B (en) 2021-05-25

Family

ID=68020562

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910599529.2A Active CN110288707B (en) 2019-07-04 2019-07-04 Three-dimensional dynamic modeling method and system

Country Status (1)

Country Link
CN (1) CN110288707B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114266775B (en) * 2022-03-03 2022-05-24 深圳市帝景光电科技有限公司 Street lamp illumination control method and system for moving object detection
CN115361500A (en) * 2022-08-17 2022-11-18 武汉大势智慧科技有限公司 Image acquisition method and system for three-dimensional modeling and three-dimensional modeling method

Citations (2)

Publication number Priority date Publication date Assignee Title
CN102208116A (en) * 2010-03-29 2011-10-05 卡西欧计算机株式会社 3D modeling apparatus and 3D modeling method
CN107134008A (en) * 2017-05-10 2017-09-05 广东技术师范学院 A kind of method and system of the dynamic object identification based under three-dimensional reconstruction

Family Cites Families (6)

Publication number Priority date Publication date Assignee Title
DE102004041944A1 (en) * 2004-08-28 2006-03-16 Hottinger Gmbh & Co. Kg Method for the three-dimensional measurement of objects of any kind
CN103489214A (en) * 2013-09-10 2014-01-01 北京邮电大学 Virtual reality occlusion handling method, based on virtual model pretreatment, in augmented reality system
CN107170037A (en) * 2016-03-07 2017-09-15 深圳市鹰眼在线电子科技有限公司 A kind of real-time three-dimensional point cloud method for reconstructing and system based on multiple-camera
CN107292948B (en) * 2016-04-12 2021-03-26 香港理工大学 Human body modeling method and device and electronic equipment
CN108053469A (en) * 2017-12-26 2018-05-18 清华大学 Complicated dynamic scene human body three-dimensional method for reconstructing and device under various visual angles camera
CN109840940B (en) * 2019-02-11 2023-06-27 清华-伯克利深圳学院筹备办公室 Dynamic three-dimensional reconstruction method, device, equipment, medium and system

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
CN102208116A (en) * 2010-03-29 2011-10-05 卡西欧计算机株式会社 3D modeling apparatus and 3D modeling method
CN107134008A (en) * 2017-05-10 2017-09-05 广东技术师范学院 A kind of method and system of the dynamic object identification based under three-dimensional reconstruction

Also Published As

Publication number Publication date
CN110288707A (en) 2019-09-27

Similar Documents

Publication Publication Date Title
CN110998659B (en) Image processing system, image processing method, and program
JP4297197B2 (en) Calibration processing apparatus, calibration processing method, and computer program
WO2017183470A1 (en) Three-dimensional reconstruction method
KR102135770B1 (en) Method and apparatus for reconstructing 3d face with stereo camera
JP6793151B2 (en) Object tracking device, object tracking method and object tracking program
JP2019057248A (en) Image processing system, image processing device, image processing method and program
US20110249117A1 (en) Imaging device, distance measuring method, and non-transitory computer-readable recording medium storing a program
JPWO2018235163A1 (en) Calibration apparatus, calibration chart, chart pattern generation apparatus, and calibration method
CN111080776B (en) Human body action three-dimensional data acquisition and reproduction processing method and system
US9183634B2 (en) Image processing apparatus and image processing method
CN113689578B (en) Human body data set generation method and device
JP2018169690A (en) Image processing device, image processing method, and image processing program
CN110288707B (en) Three-dimensional dynamic modeling method and system
CN106296574A (en) 3-d photographs generates method and apparatus
CN115035235A (en) Three-dimensional reconstruction method and device
CN111680573B (en) Face recognition method, device, electronic equipment and storage medium
JP2019106145A (en) Generation device, generation method and program of three-dimensional model
JP2007025863A (en) Photographing system, photographing method, and image processing program
CN106780474B (en) Kinect-based real-time depth map and color map registration and optimization method
JP7300895B2 (en) Image processing device, image processing method, program, and storage medium
KR20190055632A (en) Object reconstruction apparatus using motion information and object reconstruction method using thereof
JP2005031044A (en) Three-dimensional error measuring device
CN115880206A (en) Image accuracy judging method, device, equipment, storage medium and program product
JP2023016187A (en) Image processing method, computer program, and image processing device
CN111833441A (en) Face three-dimensional reconstruction method and device based on multi-camera system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210508

Address after: No. 222, Shanshui East Road, Binhu District, Wuxi City, Jiangsu Province, 214000

Applicant after: CSIC Orlando Wuxi Software Technology Co.,Ltd.

Address before: 101300 room 3001, 3rd floor, 102 door, building 8, yard 12, Xinzhong street, Nanfaxin Town, Shunyi District, Beijing

Applicant before: Beijing Weijie Dongbo Information Technology Co.,Ltd.

GR01 Patent grant
GR01 Patent grant