CN115984458B - Method, system and controller for extracting target object model based on radiation field - Google Patents


Info

Publication number
CN115984458B
Authority
CN
China
Prior art keywords
model
radiation field
observation
image
target
Prior art date
Legal status
Active
Application number
CN202211590074.6A
Other languages
Chinese (zh)
Other versions
CN115984458A (en)
Inventor
陈倩影
徐亚波
Current Assignee
Guangdong Hengqin Global Space Artificial Intelligence Co ltd
Original Assignee
Guangdong Hengqin Global Space Artificial Intelligence Co ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Hengqin Global Space Artificial Intelligence Co ltd
Priority to CN202211590074.6A
Publication of CN115984458A
Application granted
Publication of CN115984458B


Abstract

The application discloses a method, a system and a controller for extracting a target object model based on a radiation field. The method acquires a first observation angle of a radiation field model, where the radiation field model comprises a plurality of object models; obtains, according to the radiation field model, a first observation image corresponding to the first observation angle; acquires a first frame selection frame corresponding to the object model in the first observation image, where the first frame selection frame is used to frame-select a first target image on the first observation image and the first target image is the image obtained by projecting the target object model onto the first observation image; performs back projection processing according to the first frame selection frame to obtain a first model extraction viewing cone; and clips the object models in the radiation field model that lie outside the range of the first model extraction viewing cone, thereby obtaining the target object model.

Description

Method, system and controller for extracting target object model based on radiation field
Technical Field
The application relates to the technical field of 3D modeling, in particular to a method, a system and a controller for extracting a target object model based on a radiation field.
Background
In the prior art, modeling with a radiation field model is a novel 3D modeling method. Its principle is to model a space from a series of 2D images of a scene, and it has the advantages of not requiring depth data and producing vivid renderings;
however, in the actual process of modeling with a radiation field model, the object must be photographed from all around, so scenes beyond the object, such as walls and the ground, are inevitably captured, and background structures such as walls and the ground are inevitably produced during modeling. These background structures are models of non-target objects and affect the use of the radiation field model. To make the scene model reflect only the target object, the images would have to undergo object segmentation, and every 2D image would have to be segmented very precisely; accurately segmenting all the training images is very difficult, time-consuming and labor-intensive, and seriously slows the generation of the radiation field model. As a result, the user cannot effectively clip the non-target objects in the radiation field model, and the target object model in the radiation field model cannot be effectively extracted.
Disclosure of Invention
The embodiment of the application provides a method, a system and a controller for extracting a target object model based on a radiation field. A frame selection frame is obtained through an observation angle and an observation image, a model extraction viewing cone is obtained according to the frame selection frame, and the non-target object models in the radiation field model that lie outside the range of the model extraction viewing cone are then clipped. In other words, the radiation field model itself is clipped directly, so all training images do not need to be accurately segmented, and the target object model in the radiation field model can be extracted simply and effectively.
In a first aspect, an embodiment of the present application provides a method for extracting a target object model based on a radiation field, where the method includes:
acquiring a first observation angle of a radiation field model, wherein the radiation field model comprises a plurality of object models;
obtaining a first observation image corresponding to the first observation angle according to the radiation field model;
acquiring a first frame selection frame corresponding to the object model in the first observation image, wherein the first frame selection frame is used for selecting a first target image on the first observation image in a frame mode, and the first target image is an image obtained by projecting the target object model on the first observation image;
Performing back projection processing according to the first frame selection frame to obtain a first model extraction viewing cone;
and according to the first model extraction view cone, cutting the object model which is positioned outside the range of the first model extraction view cone in the radiation field model to obtain the target object model.
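The steps listed above can be sketched compactly in code. The following Python sketch is illustrative only: the radiance-field interface, the helper names and the data layout are assumptions made for readability, not interfaces disclosed by the application.

```python
# Hypothetical sketch of the first-aspect method; names and interfaces are
# illustrative only and are not defined by the patent text.
from dataclasses import dataclass
from typing import List, Tuple

Point2D = Tuple[float, float]
Point3D = Tuple[float, float, float]

@dataclass
class ObservationAngle:
    direction: Point3D      # viewing direction relative to the model center
    distance: float         # distance of the observation point from the center
    fov_degrees: float      # field-of-view angle of the observation image

def extract_target_object(field, angle: ObservationAngle, select_frame):
    """One round of the claimed flow.

    field        -- the radiation field model containing several object models
    angle        -- the first observation angle (step S110)
    select_frame -- callable returning the first frame selection frame
                    (a 2D polygon) drawn on the observation image (step S130)
    """
    image = field.render(angle)                    # step S120: observation image
    frame_2d: List[Point2D] = select_frame(image)  # step S130: 2D selection frame
    frustum = field.backproject(frame_2d, angle)   # step S140: model extraction viewing cone
    field.clip_outside(frustum)                    # step S150: clip outside the cone
    return field
```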
In some embodiments, after the object models in the radiation field model located outside the range of the first model extraction viewing cone are made transparent according to the first model extraction viewing cone, the method further includes:
acquiring a second observation angle of the radiation field model;
obtaining a second observation image corresponding to the second observation angle according to the radiation field model;
acquiring a second frame selection frame corresponding to the object model in the second observation image, wherein the second frame selection frame is used for selecting a second target image on the second observation image in a frame mode, and the second target image is an image obtained by projecting the target object model on the second observation image;
performing back projection processing on the second frame selection frame to obtain a second model extraction viewing cone;
and extracting a viewing cone according to the second model, and cutting the object model which is positioned outside the range of the second model extraction viewing cone in the radiation field model.
In some embodiments, the acquiring a first observation angle of the radiation field model comprises:
acquiring a preset training image set for generating the radiation field model;
acquiring a preset angle set corresponding to the preset training image set;
and obtaining a target preset angle from the preset angle set, and determining the target preset angle as the first observation angle.
In some embodiments, the obtaining, according to the radiation field model, a first observation image corresponding to the first observation angle includes:
obtaining a target training image from the preset training image set according to the target preset angle;
the target training image is determined as the first observation image.
In some embodiments, the obtaining, according to the radiation field model, a first observation image corresponding to the first observation angle includes:
rendering the radiation field model according to the first observation angle to obtain a rendered image;
the rendered image is determined as the first observation image.
In some embodiments, the performing back projection processing according to the first frame selection frame to obtain a first model extraction viewing cone includes:
obtaining the position information of the observation point according to the first observation image;
Obtaining a viewing cone edge according to the position information of the observation point and the first frame selection frame;
and obtaining the first model extraction viewing cone according to the viewing cone edge.
In some embodiments, clipping the object model of the radiation field model located outside the range of the first model extraction viewing cone according to the first model extraction viewing cone comprises:
according to the first model extraction view cone, setting the opacity of the object model outside the range of the model extraction view cone in the radiation field model to 0;
or, extracting a view cone according to the first model, and reducing the modeling area of the radiation field model to a model area corresponding to the view cone extracted by the first model.
In some embodiments, after the back projection processing is performed according to the first frame selection frame to obtain the first model extraction viewing cone, the method further includes:
determining a region, which is located in the first model extraction view cone range, in the radiation field model as a target region;
determining a region, which is positioned outside the first model extraction viewing cone range, in the radiation field model as a non-target region;
and cutting the non-target area model in the radiation field model to obtain a target area model in the radiation field model, and determining the target area model as the target object model.
In some embodiments, the observation angles of the radiation field model include an X-axis projection angle, a Y-axis projection angle, and a Z-axis projection angle.
In a second aspect, an embodiment of the present application provides a radiation field-based object model extraction system, the system including:
the angle acquisition module is used for acquiring a first observation angle of a radiation field model, wherein the radiation field model comprises a plurality of object models;
the image generation module is used for obtaining a first observation image corresponding to the first observation angle according to the radiation field model;
the frame acquisition module is used for acquiring a first frame selection frame corresponding to the object model in the first observation image, wherein the first frame selection frame is used for selecting a first target image on the first observation image in a frame mode, and the first target image is an image obtained by projecting the target object model on the first observation image;
the view cone generating module is used for performing back projection processing according to the first frame selection frame to obtain a first model extraction view cone;
and the clipping processing module is used for extracting a viewing cone according to the first model, clipping the object model which is positioned outside the range of the first model extraction viewing cone in the radiation field model, and obtaining the target object model.
In a third aspect, an embodiment of the present application provides a controller, including a memory, a processor and a computer program stored in the memory and capable of running on the processor, where the processor implements the method for extracting a radiation field-based object model according to any one of the embodiments of the first aspect when the processor executes the computer program.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing computer-executable instructions for performing the method for extracting a radiation field-based object model according to any one of the embodiments of the first aspect.
The application has at least the following beneficial effects. According to the method for extracting the target object model based on the radiation field, a first observation angle of the radiation field model is obtained, where the radiation field model comprises a plurality of object models; a first observation image corresponding to the first observation angle is obtained according to the radiation field model; a first frame selection frame corresponding to the object model in the first observation image is obtained, where the first frame selection frame is used to frame-select a first target image on the first observation image and the first target image is the image obtained by projecting the target object model onto the first observation image; back projection processing is performed according to the first frame selection frame to obtain a first model extraction viewing cone; and according to the first model extraction viewing cone, the object models in the radiation field model located outside the range of the first model extraction viewing cone are clipped to obtain the target object model.
Drawings
FIG. 1 is a flowchart of a method for extracting a radiation field-based object model according to an embodiment of the present application;
FIG. 2 is a flowchart of the additional steps performed, in a radiation field-based target object model extraction method according to another embodiment of the present application, after the object models outside the model extraction viewing cone range in the radiation field model are made transparent;
FIG. 3 is a flowchart of acquiring a first observation angle of the radiation field model in a radiation field-based target object model extraction method according to another embodiment of the present application;
fig. 4 is a flowchart of a method for extracting a radiation field-based object model according to another embodiment of the present application, where a first observation image corresponding to the first observation angle is obtained according to the radiation field model;
FIG. 5 is a flowchart of a method for extracting a target object model based on a radiation field according to another embodiment of the present application, where a first observation image corresponding to the first observation angle is obtained according to the radiation field model;
FIG. 6 is a flowchart of performing back projection processing according to the first frame selection frame to obtain the first model extraction viewing cone in a radiation field-based target object model extraction method according to another embodiment of the present application;
FIG. 7 is a flowchart of a clipping process for the object model located outside the first model extraction view cone range in the radiation field model in a radiation field-based object model extraction method according to another embodiment of the present application;
FIG. 8 is a flowchart illustrating an additional process after extracting a view cone from a first model in a method for extracting a radiation field-based object model according to another embodiment of the present application;
FIG. 9 is a schematic diagram illustrating extraction of a target object model according to another embodiment of the present application in a method for extracting a target object model based on a radiation field;
fig. 10 is a block diagram of a controller according to another embodiment of the present application.
Reference numerals: 900. radiation field model; 910. target object model; 920. non-target object model; 931. observation point corresponding to the first observation angle; 932. first observation image; 933. first target area; 934. first model extraction viewing cone edge; 941. observation point corresponding to the second observation angle; 942. second observation image; 943. second target area; 944. second model extraction viewing cone edge.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
In some embodiments, although functional modules are divided in the system diagrams and a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed with a different module division or in a different order than illustrated. The terms "first", "second" and the like in the description, in the claims and in the above-described figures are used to distinguish between similar objects and are not necessarily used to describe a particular sequence or chronological order.
In the prior art, modeling with a radiation field model is a novel 3D modeling method. Its principle is to model a space from a series of 2D images of a scene, and it has the advantages of not requiring depth data and producing vivid renderings;
however, in the actual process of modeling with a radiation field model, the object must be photographed from all around, so scenes beyond the object, such as walls and the ground, are inevitably captured, and background structures such as walls and the ground are inevitably produced during modeling. These background structures are models of non-target objects and affect the use of the radiation field model. To make the scene model reflect only the target object, the images would have to undergo object segmentation, and every 2D image would have to be segmented very precisely; accurately segmenting all the training images is very difficult, time-consuming and labor-intensive, and seriously slows the generation of the radiation field model. As a result, the user cannot effectively clip the non-target objects in the radiation field model, and the target object model in the radiation field model cannot be effectively extracted.
In order to at least solve the above problems, the application discloses a method, a system and a controller for extracting a target object model based on a radiation field. The method obtains a first observation angle of the radiation field model, where the radiation field model comprises a plurality of object models; obtains, according to the radiation field model, a first observation image corresponding to the first observation angle; obtains a first frame selection frame corresponding to the object model in the first observation image, and divides the object models into the target object model and the non-target object models according to the first frame selection frame; performs back projection processing according to the first frame selection frame to obtain a first model extraction viewing cone; and clips the non-target object models outside the range of the first model extraction viewing cone in the radiation field model to obtain the target object model.
Embodiments of the present application are further described below with reference to the accompanying drawings.
Referring to fig. 1, fig. 1 is a flowchart of a method for extracting a radiation field-based object model according to an embodiment of the present application, and in some embodiments, the method for extracting a radiation field-based object model includes, but is not limited to, the following steps S110, S120, S130, S140, and S150;
step S110, a first observation angle of a radiation field model is obtained, wherein the radiation field model comprises a plurality of object models;
in some embodiments, acquiring the first observation angle of the radiation field model includes acquiring a first observation angle input by a user through a graphical user interface, or taking any angle from a preset angle set in the generation system of the radiation field model, or having the target object model extraction system generate an arbitrary angle according to the radiation field model. In the case where the first observation angle input by the user is obtained through the graphical user interface, the user can rotate and scale the radiation field model through the graphical user interface and then select a first observation angle of interest, which facilitates the subsequent steps of obtaining the first observation image and the like according to the first observation angle selected by the user.
In some embodiments, the first observation angle includes information such as an observation direction, an observation distance and an observation view angle. The observation direction and the observation distance are used to determine the specific position of an observation point outside the radiation field model: the observation direction represents the direction of the observation point relative to the center of the radiation field model, and the observation distance represents the distance of the observation point from the center of the radiation field model. The position of the observation point and the observation view angle are used to generate the first observation image, where the observation view angle represents the field-of-view angle range of the first observation image acquired at the observation point.
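As a small illustration of how the observation direction and observation distance described above determine the position of the observation point relative to the center of the radiation field model, here is a minimal sketch; the vector layout and the function name are assumptions, not part of the application.

```python
import numpy as np

def observation_point(center: np.ndarray, direction: np.ndarray, distance: float) -> np.ndarray:
    """Place the observation point at `distance` from the model center along
    the (normalized) observation direction, as described above."""
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)          # the direction only encodes orientation
    return np.asarray(center, dtype=float) + distance * d
```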
In some embodiments, the radiation field model in the present application refers to a radiation field model generated from a plurality of original, unsegmented pictures. The radiation field model includes a plurality of object models, among which are the target object model of interest to the user and the non-target object models that need to be clipped; the non-target object models include background structures present in the scene, such as walls and the ground. Radiation field modeling is a novel 3D modeling method that models a space through a series of 2D images of a scene and has the advantages of not requiring depth data and producing vivid renderings.
Step S120, obtaining a first observation image corresponding to a first observation angle according to a radiation field model;
in some embodiments, the first observation image corresponding to the first observation angle is obtained according to the radiation field model. The first observation image is the projection image of the radiation field model at the first observation angle: the radiation field model is a 3D model, and 2D rendering of the radiation field model according to the first observation angle yields a two-dimensional image representation of it. The first observation image can be understood as the picture that a camera or a human eye would capture when observing the radiation field model at the first observation angle. The first observation image may be a training image corresponding to the radiation field model, or a real-time rendered image generated from the radiation field model according to the first observation angle.
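Where the first observation image is produced by 2D rendering of the radiation field model, the color of each pixel can be obtained by marching along the viewing ray and compositing opacity and color. The sketch below is a generic discrete version of that idea under assumed conventions (opacity treated as a per-unit-length density, uniform sampling); it is not the application's own renderer.

```python
import numpy as np

def render_pixel(field_fn, origin, direction, t_near, t_far, n_samples=128):
    """Approximate the observed color of one pixel by sampling the radiance
    field (o, c) = f(p, v) along the ray and compositing the samples.
    `field_fn` is a hypothetical callable (points, dirs) -> (opacity, color)."""
    t = np.linspace(t_near, t_far, n_samples)
    dt = t[1] - t[0]
    points = np.asarray(origin) + t[:, None] * np.asarray(direction)   # (n_samples, 3)
    dirs = np.broadcast_to(np.asarray(direction), points.shape)
    opacity, color = field_fn(points, dirs)          # opacity (n,), color (n, 3)
    alpha = 1.0 - np.exp(-opacity * dt)              # per-sample absorption
    transmittance = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = transmittance * alpha                  # occlusion-aware weights
    return (weights[:, None] * color).sum(axis=0)    # composited RGB value
```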
In an embodiment, the radiation field-based target object model extraction system includes a radiation field model generating component. When the first observation angle of the radiation field model is acquired, the radiation field model generating component can determine, according to the first observation angle, the coordinate information of the observation point corresponding to that angle. The coordinate information of the observation point is used in a subsequent step to generate the first model extraction viewing cone together with the three-dimensional coordinate information of the first frame selection frame.
In some embodiments, when the first observation angle input by the user is obtained through a graphical user interface, the user can rotate and scale the radiation field model through the graphical user interface and thereby control the observation point corresponding to the first observation angle. The radiation field model generating component obtains, through the graphical user interface, the observation point corresponding to the first observation angle; the coordinate information of this observation point is used in a subsequent step to generate the first model extraction viewing cone together with the three-dimensional coordinate information of the first frame selection frame.
Step S130, a first frame selection frame corresponding to the object model in the first observation image is obtained, where the first frame selection frame is used to frame-select a first target image on the first observation image, and the first target image is the image obtained by projecting the target object model onto the first observation image;
in some embodiments, the first observation image contains the projection of the radiation field model, which in turn includes the projection of each object model onto the first observation image. The method therefore acquires the first frame selection frame corresponding to this projection in the first observation image and divides the object models into the target object model and the non-target object models according to the first frame selection frame: the projection of the target object model on the first observation image lies inside the first frame selection frame, and the projections of the non-target object models lie outside it.
In some embodiments, the radiation field-based target object model extraction system includes a radiation field model generating component and a corresponding graphical user interface. The first frame selection frame may be polygon information input by the user through the graphical user interface: the system acquires the polygon information through the graphical user interface, the polygon corresponding to that information encloses the projection of the target object model on the first observation image, the interior of the polygon is determined as the target area, and the object model in the target area is determined as the target object model.
In some embodiments, the first frame selection frame may also be the result of segmenting the first observation image with an existing model segmentation algorithm. The object models are divided into the target object model and the non-target object models according to the first frame selection frame: the projection of the target object model on the first observation image lies inside the first frame selection frame, and the projections of the non-target object models lie outside it. Compared with the prior art, in which all training images used to generate the radiation field model must be segmented, here only the observation image is segmented to obtain the first frame selection frame representing the segmentation result, and the non-target area is then clipped according to the first frame selection frame in a subsequent step. The target object model can therefore be extracted more simply and effectively, saving time and labor.
Step S140, performing back projection processing according to the first frame selection frame to obtain a first model extraction viewing cone;
in some embodiments, the model core formed by radiation field modeling can be regarded as a vector field and can be written simply as (o, c) = f(p, v), where p is a coordinate point in space with dimension 3, v is a direction with dimension 2, o is the opacity of the spatial point p expressed as a scalar value, and c is the color of the point p observed from the direction v, with three RGB components. Performing back projection processing according to the first frame selection frame therefore means determining the three-dimensional coordinate information of the first frame selection frame in the radiation field 3D modeling space from its two-dimensional coordinate information in the first observation image and from the three-dimensional coordinate information of the plane corresponding to the first observation image in the radiation field 3D modeling space. The method used to determine the three-dimensional coordinates of the first frame selection frame from its two-dimensional coordinates in the first observation image is an existing 3D coordinate conversion or mapping method and is not limited by the present application.
In some embodiments, the three-dimensional coordinate information of the first frame selection frame in the radiation field 3D modeling space includes a plurality of three-dimensional coordinate points, which are connected to form the three-dimensional representation of the first frame selection frame. The viewing cone corresponding to the first frame selection frame is defined as the set of points in space whose projections fall within the area framed by the first frame selection frame. Specifically, performing back projection processing according to the first frame selection frame includes: once the three-dimensional coordinate information of the first frame selection frame in the radiation field 3D modeling space has been determined, obtaining the observation point corresponding to the first observation angle, and determining the range of the first model extraction viewing cone from the three-dimensional coordinates of that observation point and the three-dimensional coordinates of the first frame selection frame. Taking the observation point corresponding to the first observation angle as the origin, rays are cast from the observation point through the three-dimensional coordinate points of the first frame selection frame; these rays are the edges of the first model extraction viewing cone, and the cone-shaped region they enclose is the range of the first model extraction viewing cone.
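A minimal sketch of this construction follows, assuming the back-projected frame selection frame is a convex polygon with its corner points listed in order around it; the function names are illustrative, not part of the disclosed system.

```python
import numpy as np

def frustum_planes(apex: np.ndarray, corners_3d: np.ndarray) -> list:
    """Side planes of the model extraction viewing cone.

    apex       -- observation point corresponding to the first observation angle
    corners_3d -- (N, 3) back-projected corner points of the first frame
                  selection frame, ordered around the polygon (assumed convex)
    Each plane is returned as (normal, offset) with the normal pointing inward.
    """
    centroid = corners_3d.mean(axis=0)           # an interior direction reference
    planes = []
    n_pts = len(corners_3d)
    for i in range(n_pts):
        a, b = corners_3d[i], corners_3d[(i + 1) % n_pts]
        normal = np.cross(a - apex, b - apex)    # plane through apex, a and b
        if np.dot(normal, centroid - apex) < 0:  # orient the normal toward the inside
            normal = -normal
        planes.append((normal, np.dot(normal, apex)))
    return planes

def inside_frustum(points: np.ndarray, planes: list) -> np.ndarray:
    """Boolean mask: True where a point lies on the inner side of every plane."""
    mask = np.ones(len(points), dtype=bool)
    for normal, offset in planes:
        mask &= (points @ normal - offset) >= 0.0
    return mask
```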
In some embodiments, the first model extraction viewing cone represents the observation range whose apex is the observation point corresponding to the first observation angle and whose boundary is the first frame selection frame. The object models are divided into the target object model and the non-target object models according to the first frame selection frame: the projection of the target object model on the first observation image lies inside the first frame selection frame, and the projections of the non-target object models lie outside it. The non-target object models are then clipped or made non-visible in a subsequent step, which achieves the same effect as segmentation while retaining only the target object model, thereby saving the work of segmenting many pictures and improving efficiency.
And step S150, extracting a viewing cone according to the first model, and cutting an object model which is positioned outside the range of the first model extracting the viewing cone in the radiation field model to obtain a target object model.
In some embodiments, after the radiation field model has been built, the color observed at any point p in the radiation field along a direction v is the color integral over the ray emitted from p along v, and this integral must take into account the occlusion effect of the opacity. More specifically, the observation color is obtained by integrating along the ray, where the integration range is the modeling range specified by the radiation field model in the present application.
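Written in the (o, c) = f(p, v) notation introduced above, a standard volume-rendering form of this integral — given here as a reconstruction under the assumption that o acts as an absorption density along the ray, not as a quotation of the application's own formula — is:

$$
C(p, v) = \int_{t_n}^{t_f} T(t)\, o\big(p + t v\big)\, c\big(p + t v,\, v\big)\, \mathrm{d}t,
\qquad
T(t) = \exp\!\left(-\int_{t_n}^{t} o\big(p + s v\big)\, \mathrm{d}s\right),
$$

where [t_n, t_f] is the modeling range specified by the radiation field model and the transmittance T(t) accounts for the occlusion effect of the opacity along the ray.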
In some embodiments, the object models outside the range of the first model extraction viewing cone are the non-target object models. Because the opacity of a radiation field model generally carries rich geometric structure information, there is usually a clear gap between the target object and the background objects, so the polygon drawn by the user does not need to hug the object surface precisely. The step of clipping the non-target object models outside the range of the first model extraction viewing cone in the radiation field model includes: making the non-target object models outside the range of the first model extraction viewing cone transparent, i.e., setting their opacity to 0. This effectively avoids the problem in the prior art that, in order to extract an object model, all training images, usually numbering in the hundreds, must be accurately segmented at the pixel level, which is very inconvenient.
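A minimal sketch of this opacity-zeroing step, written as a wrapper around a generic radiance-field query (o, c) = f(p, v); both callables are placeholders (for example the frustum test sketched earlier) and are not interfaces of the application or of any particular library.

```python
import numpy as np

def clip_field_opacity(field_fn, inside_fn):
    """Wrap a radiance-field query (o, c) = f(p, v) so that the opacity o is
    forced to 0 for every sample outside the model extraction viewing cone.

    field_fn  -- hypothetical callable (points, dirs) -> (opacity, color)
    inside_fn -- callable (points) -> boolean mask, e.g. the frustum test above
    """
    def wrapped(points: np.ndarray, dirs: np.ndarray):
        opacity, color = field_fn(points, dirs)
        mask = inside_fn(points)
        return np.where(mask, opacity, 0.0), color   # outside samples vanish
    return wrapped
```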
Referring to fig. 2, fig. 2 is a flowchart of the additional steps performed, in a radiation field-based target object model extraction method according to another embodiment of the present application, after the object models outside the model extraction viewing cone range in the radiation field model are made transparent. In some embodiments, after the non-target object models outside the range of the first model extraction viewing cone in the radiation field model are made transparent according to the first model extraction viewing cone, the method further includes, but is not limited to, the following steps S210, S220, S230, S240 and S250:
step S210, obtaining a second observation angle of the radiation field model;
step S220, obtaining a second observation image corresponding to a second observation angle according to the radiation field model;
step S230, a second frame selection frame corresponding to the object model in the second observation image is obtained, the second frame selection frame is used for selecting a second target image on the second observation image in a frame mode, and the second target image is an image obtained by projecting the target object model on the second observation image;
step S240, performing back projection processing on the second frame selection frame to obtain a second model extraction viewing cone;
and S250, extracting a viewing cone according to the second model, and cutting an object model which is positioned outside the range of the second model extracting the viewing cone in the radiation field model.
In some embodiments, steps S210 to S250 are performed after steps S110 to S150. It is conceivable that the non-target object models are clipped at one observation angle through steps S110 to S150, but the radiation field model obtained by clipping at a single observation angle may not meet the user's requirements, or the non-target object models may not be completely removed. Therefore, after steps S110 to S150, steps S210 to S250 obtain another observation angle and perform the clipping again on the already clipped radiation field model. The process is as follows: a second observation angle of the radiation field model is obtained, together with the observation point corresponding to the second observation angle; a second observation image corresponding to the second observation angle is obtained according to the radiation field model; a second frame selection frame corresponding to the object model in the second observation image is acquired, and the radiation field model is divided into a target area and a non-target area according to the second frame selection frame, where the interior of the second frame selection frame corresponds to the target area and the region outside it corresponds to the non-target area in which the remaining non-target object models are located; back projection processing is then performed according to the second frame selection frame to obtain the second model extraction viewing cone, and the object models located outside the range of the second model extraction viewing cone in the radiation field model are clipped, so that the remaining non-target object models are further removed.
In some embodiments, the second observation angle and the second frame selection frame are obtained in the same way as the first observation angle and the first frame selection frame: the observation angle may be input by the user through a graphical user interface, taken from a preset angle set in the generation system of the radiation field model, or generated by the target object model extraction system according to the radiation field model, and the frame selection frame may be obtained through the graphical user interface or from an existing image segmentation algorithm.
In some embodiments, the application can clip the radiation field model multiple times according to the user's requirements. The specific process is as follows (a minimal sketch of this interactive loop is given after the list):
1) The user selects an observation angle, and the modeling system gives out a corresponding observation image;
2) The user selects an object of interest through the polygonal frame;
3) The system records polygon information selected by a user, back projects the polygon to space, calculates a viewing cone formed by the polygon, and sets the opacity outside the viewing cone to 0;
4) The system renders the cut image to the user;
5) The user changes the observation angle, and steps 2), 3) and 4) are repeated until the desired object has been extracted. The scheme of the application thus lets the user crop directly on the rendering result: the objects do not need to be accurately segmented in advance, and only a few rounds of selection after training are required, which is simple and effective.
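The interactive loop 1) to 5) can be sketched as follows; every name (`choose_angle`, `draw_polygon`, the field methods and so on) is a hypothetical placeholder for the corresponding user or system action, not an interface disclosed by the application.

```python
def interactive_extraction(field, choose_angle, draw_polygon, user_is_satisfied):
    """Repeat selection rounds until the wanted object has been isolated.

    field             -- the (already trained) radiation field model
    choose_angle      -- step 1)/5): returns the observation angle picked by the user
    draw_polygon      -- step 2): returns the polygon the user frames on the image
    user_is_satisfied -- returns True when the extraction is finished
    """
    while True:
        angle = choose_angle()                          # 1) / 5) pick an angle
        image = field.render(angle)                     #    system shows the view
        polygon_2d = draw_polygon(image)                # 2) frame the object of interest
        frustum = field.backproject(polygon_2d, angle)  # 3) back-project to a viewing cone
        field.set_opacity_outside(frustum, 0.0)         # 3) zero opacity outside the cone
        clipped_image = field.render(angle)             # 4) show the clipped result
        if user_is_satisfied(clipped_image):
            return field
```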
Referring to fig. 3, fig. 3 is a flowchart illustrating a method for extracting a radiation field-based object model according to another embodiment of the present application, in which a first observation angle of a radiation field model is obtained, and in some embodiments, the method includes, but is not limited to, the following steps S310, S320 and S330:
step S310, acquiring a preset training image set for generating a radiation field model;
step S320, a preset angle set corresponding to a preset training image set is obtained;
step S330, obtaining a target preset angle from the preset angle set, and determining the target preset angle as a first observation angle.
In some embodiments, the first observation angle of the radiation field model may be determined from the user's input in a graphical user interface, or it may be a preset angle provided to the user by the radiation field model generating component. In the latter case, acquiring the first observation angle of the radiation field model includes acquiring the preset training image set used to generate the radiation field model, acquiring the preset angle set corresponding to the preset training image set, obtaining a target preset angle from the preset angle set, and determining the target preset angle as the first observation angle.
Referring to fig. 4, fig. 4 is a flowchart of a method for extracting a target object model based on a radiation field according to another embodiment of the present application, wherein the method includes steps of, but is not limited to, steps S410 and S420:
step S410, obtaining a target training image from a preset training image set according to a target preset angle;
step S420, determining the target training image as the first observation image.
In some embodiments, the first observation angle of the radiation field model may be determined from the user's input in a graphical user interface, or it may be a preset angle provided to the user by the radiation field model generating component. When the first observation angle is determined from the user's input in the graphical user interface, the target object model extraction system compares the first observation angle with the observation angles of the images in the preset training image set corresponding to the radiation field model; if the observation angle of a training image in the preset training image set is consistent with the first observation angle, that training image is determined as the target training image, and the target training image is then determined as the first observation image.
In some embodiments, when the first observation angle is a preset angle provided to the user by the radiation field model generating component and the preset angle is the angle corresponding to a training image in the preset training image set, the target training image is obtained from the preset training image set according to the target preset angle and is determined as the first observation image. This allows the radiation field-based target object model extraction method to be applied flexibly and effectively.
Referring to fig. 5, fig. 5 is a flowchart of a method for extracting a target object model based on a radiation field according to another embodiment of the present application, where a first observation image corresponding to a first observation angle is obtained according to a radiation field model, and in some embodiments, the method includes, but is not limited to, the following steps S510 and S520:
step S510, performing rendering treatment on the radiation field model according to a first observation angle to obtain a rendered image;
step S520, the rendered image is determined as the first observation image.
In some embodiments, the first observation angle of the radiation field model may be determined from the user's input in a graphical user interface, or it may be a preset angle provided to the user by the radiation field model generating component. When the first observation angle is determined from the user's input in the graphical user interface, the target object model extraction system compares the first observation angle with the observation angles of the images in the preset training image set corresponding to the radiation field model; if no training image in the preset training image set has an observation angle consistent with the first observation angle, the target object model extraction system renders the radiation field model according to the first observation angle through the radiation field model generating component to obtain a rendered image, and the rendered image is determined as the first observation image. This allows the radiation field-based target object model extraction method to be applied flexibly and effectively.
Referring to fig. 6, fig. 6 is a flowchart of a first model extraction view cone obtained by performing back projection processing according to a first frame selection frame in a method for extracting a target object model based on a radiation field according to another embodiment of the present application, and the first model extraction view cone is obtained by performing back projection processing according to the first frame selection frame, including but not limited to the following steps S610, S620 and S630:
Step S610, obtaining the position information of the observation point according to the first observation image;
step S620, a view cone edge is obtained according to the position information of the observation point and the first frame selection frame;
Step S630, obtaining the first model extraction viewing cone according to the viewing cone edge.
In some embodiments, the position information of the observation point is obtained according to the first observation image, the viewing cone edges are obtained according to the position information of the observation point and the first frame selection frame, and the first model extraction viewing cone is obtained according to the viewing cone edges. The first model extraction viewing cone represents the observation range whose apex is the observation point corresponding to the first observation angle and whose boundary is the first frame selection frame. The three-dimensional coordinate information of the first frame selection frame in the radiation field 3D modeling space includes a plurality of three-dimensional coordinate points, which are connected to form the three-dimensional representation of the first frame selection frame; the viewing cone corresponding to the first frame selection frame is defined as the set of points in space whose projections fall within the area framed by the first frame selection frame. Specifically, taking the observation point corresponding to the first observation angle as the origin, rays are cast from the observation point through the three-dimensional coordinate points of the first frame selection frame; these rays are the edges of the first model extraction viewing cone, and the cone-shaped region they enclose is the range of the first model extraction viewing cone. As described above, the object models are divided into the target object model and the non-target object models according to the first frame selection frame, with the projection of the target object model on the first observation image inside the first frame selection frame and the projections of the non-target object models outside it. Therefore, once the first frame selection frame has been back-projected into the radiation field model, the viewing cone formed by connecting the edges of the first frame selection frame to the observation point can effectively separate the target object model from the non-target object models, so the non-target object models can be clipped and the target object model extracted.
Referring to fig. 7, fig. 7 is a flowchart illustrating a method for extracting a target object model based on a radiation field according to another embodiment of the present application, in which a non-target object model of a radiation field model located outside a range of a first model extraction viewing cone is clipped, in some embodiments, an object model of a radiation field model located outside the range of the first model extraction viewing cone is clipped according to the first model extraction viewing cone, including but not limited to the following steps S710 and S720:
step S710, according to the first model extraction viewing cone, setting the opacity of an object model outside the range of the model extraction viewing cone in the radiation field model to 0;
step S720, or, extracting a view cone according to the first model, and reducing the modeling area of the radiation field model to a model area corresponding to the view cone extracted by the first model.
In some embodiments, the object models outside the range of the first model extraction viewing cone are the non-target object models. According to the first model extraction viewing cone, the opacity of the non-target object models outside the range of the viewing cone in the radiation field model is set to 0 (equivalently, their transparency is set to 100%), which makes the non-target object models transparent. After the user has determined the target area corresponding to the target object, the modeling space range can thus be kept unchanged while the opacity of the non-target area is set to zero so that only the target object is displayed, which is simpler and more effective.
In some embodiments, the modeling area of the radiation field model is instead reduced, according to the first model extraction viewing cone, to the model area corresponding to the first model extraction viewing cone. In this way, after the user determines the target area, the modeling range is directly limited to the area framed by the user without changing any opacity, which is more flexible and effective.
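One simple way to realize this bound-reduction alternative — an illustrative assumption, since the application does not fix a particular data structure for the modeling area — is to shrink the axis-aligned modeling bounds to the smallest box containing the part of the old bounds that lies inside the viewing cone:

```python
import numpy as np

def reduced_bounds(old_min, old_max, inside_fn, samples_per_axis=64):
    """Shrink the axis-aligned modeling area to the smallest box containing
    the part of the old area that lies inside the viewing cone.

    inside_fn -- callable (points) -> boolean mask (e.g. the frustum test above)
    The grid sampling here is a coarse approximation, used only for illustration.
    """
    axes = [np.linspace(lo, hi, samples_per_axis) for lo, hi in zip(old_min, old_max)]
    grid = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1).reshape(-1, 3)
    kept = grid[inside_fn(grid)]
    if len(kept) == 0:
        return np.asarray(old_min), np.asarray(old_max)   # nothing inside: keep bounds
    return kept.min(axis=0), kept.max(axis=0)
```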
Referring to fig. 8, fig. 8 is a flowchart of an additional process after a first model is extracted from a view cone in a method for extracting a target object model based on a radiation field according to another embodiment of the present application, and in some embodiments, after a back projection process is performed according to a first frame selection frame to obtain a first model extracted from a view cone, the method further includes, but is not limited to, the following steps S810, S820 and S830:
step S810, determining a region in the radiation field model, which is positioned in the range of the first model extraction viewing cone, as a target region;
step S820, determining a region, which is located outside the first model extraction viewing cone range, in the radiation field model as a non-target region;
step S830, clipping the non-target area model in the radiation field model to obtain a target area model in the radiation field model, and determining the target area model as a target object model.
In some embodiments, the area of the radiation field model inside the range of the first model extraction viewing cone is determined as the target area, the area outside that range is determined as the non-target area, and the model in the non-target area is clipped to obtain the model in the target area, which is determined as the target object model. Here, the whole region outside the first model extraction viewing cone range is determined as the non-target area and is made transparent directly, rather than making only the non-target objects transparent. This effectively removes background structures such as walls and the ground that are inevitably reconstructed during modeling, so the extracted target object model is cleaner and more complete, improving the user experience.
In some embodiments, the observation angles of the radiation field model include an X-axis projection angle, a Y-axis projection angle and a Z-axis projection angle. It is conceivable that, after the radiation field model has been trained and the user starts selecting the target object in the radiation field model, the system provides the user with rendered views at these three projection angles. In this way, when the user does not know which angle to choose, or when the target object model has a simple structure, observation angles can be provided to the user automatically, and the target object model can be extracted simply and efficiently.
Referring to fig. 9, fig. 9 is a schematic diagram of extracting a target object model in a radiation field-based target object model extraction method according to another embodiment of the present application. Fig. 9 (1) is a schematic diagram of the radiation field model 900 during the implementation of the method, and fig. 9 (2) is a schematic diagram of the extracted target object model 910 after the method has been implemented. In this example the hexagon in the middle is the target object and its surroundings are unwanted interfering background. When the user frames the target area at the view angle of the observation point 931 corresponding to the first observation angle, the space is clipped once and the spatial range containing the object is reduced; the target is then framed again at the view angle of the observation point 941 corresponding to the second observation angle, so that the target object is enclosed and the surrounding interference is removed.
Specifically, referring to fig. 9 (1), a plurality of non-target object models 920 exist around the target object model 910. The non-target object models 920 are background structures, such as walls and the ground, that arise during modeling because the object is photographed from all around in the actual radiation-field modeling process. The purpose of the application is to clip the non-target object models 920, or the areas where they are located, so that only the target object model 910 remains in, or can be observed in, the radiation field model 900, which is the effect of extracting the target object model 910.
In some embodiments, referring to fig. 9 (1), a first observation angle of the radiation field model is obtained, the observation point 931 corresponding to the first observation angle is obtained according to the radiation field model, and the first observation image 932 corresponding to the first observation angle is obtained according to the radiation field model. A first frame selection frame corresponding to the object model in the first observation image is acquired, and the object models are divided into the target object model and the non-target object models according to the first frame selection frame: the interior of the first frame selection frame is the first target area 933, the projection of the target object model on the first observation image lies inside the first target area 933, and the projections of the non-target object models lie outside it. The first model extraction viewing cone edges 934 are obtained according to the position information of the observation point and the first frame selection frame, the first model extraction viewing cone is obtained according to those edges, and the non-target object models located outside the range of the first model extraction viewing cone in the radiation field model are then clipped, where the clipping processing includes, but is not limited to: setting the opacity of the non-target object models outside the range of the model extraction viewing cone in the radiation field model to 0 according to the first model extraction viewing cone, or reducing the modeling area of the radiation field model to the model area corresponding to the first model extraction viewing cone. In this way the training images do not need to be segmented, time and labor are saved, and the target object model in the radiation field model can be extracted simply and effectively.
Further, if the user is not satisfied after the clipping processing, or some non-target object models have not been clipped, an observation angle can be obtained again and the clipping processing repeated on the basis of the already clipped radiation field model. The process is as follows: a second observation angle of the radiation field model is obtained; a second observation point 941 corresponding to the second observation angle is obtained according to the radiation field model; a second observation image 942 corresponding to the second observation angle is obtained according to the radiation field model; and a second frame selection frame corresponding to the object models in the second observation image is acquired. The second observation image is divided by the second frame selection frame into a target region 943 and a non-target region, the interior of the second frame selection frame being the target region 943, so that the projection of the target object model on the second observation image falls within the target region 943 and the projections of the non-target object models fall within the non-target region. View cone edges 944 are obtained according to the position information of the observation point and the second frame selection frame, the second model extraction view cone is obtained from the view cone edges, and the non-target object models located outside the range of the second model extraction view cone in the radiation field model are clipped. This is repeated until the non-target object models around the target object model are removed and the target object model is extracted.
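The iterative refinement described above can be sketched, under the same assumptions, as the intersection of the frustums obtained from successive frame selections; the hypothetical box_to_frustum_planes helper from the previous sketch is reused here.

```python
# Each additional frame selection from a new observation angle contributes another
# frustum; only points inside every selected frustum are retained.
def iterative_clip(sigma, grid_xyz, selections):
    """selections: iterable of (K, c2w, box) tuples, one per observation angle."""
    keep = np.ones(grid_xyz.shape[0], dtype=bool)
    for K, c2w, box in selections:
        for n, p in box_to_frustum_planes(K, c2w, box):
            keep &= (grid_xyz - p) @ n >= 0
    clipped = sigma.copy()
    clipped[~keep] = 0.0      # points outside any selected frustum are removed
    return clipped
```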
In some embodiments, the present application further proposes a radiation field-based object model extraction system, the system comprising: the angle acquisition module is used for acquiring a first observation angle of a radiation field model, wherein the radiation field model comprises a plurality of object models; the image generation module is used for obtaining a first observation image corresponding to the first observation angle according to the radiation field model; the frame acquisition module is used for acquiring a first frame selection frame corresponding to the object model in the first observation image, wherein the first frame selection frame is used for selecting a first target image on the first observation image in a frame mode, and the first target image is an image obtained by projecting the target object model on the first observation image; the view cone generating module is used for performing back projection processing according to the first frame selection frame to obtain a first model extraction view cone; and the cutting processing module is used for extracting the viewing cone according to the first model, and cutting the object model which is positioned outside the range of the first model for extracting the viewing cone in the radiation field model to obtain the target object model.
The target object model extraction system obtains a frame selection frame through an observation angle and the corresponding observation image, and obtains a model extraction viewing cone according to the frame selection frame, so that the non-target object models located outside the range of the model extraction viewing cone in the radiation field model are clipped; as a result, the training images do not all need to be accurately segmented, and the target object model in the radiation field model can be extracted simply and effectively.
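As a purely structural illustration of the five modules listed above, the sketch below names one class per module; the class and method names are assumptions made for readability and do not reflect the application's actual interfaces.

```python
class TargetModelExtractionSystem:
    def __init__(self, radiance_field):
        self.field = radiance_field            # the trained radiation field model

    def acquire_angle(self):                   # angle acquisition module
        raise NotImplementedError

    def render_observation(self, angle):       # image generation module
        raise NotImplementedError

    def acquire_box(self, image):              # frame acquisition module (user frame selection)
        raise NotImplementedError

    def back_project(self, angle, box):        # view cone generation module
        raise NotImplementedError

    def clip(self, frustum):                   # clipping processing module
        raise NotImplementedError

    def extract_target(self):
        """One pass of the extraction flow: angle -> image -> box -> view cone -> clipping."""
        angle = self.acquire_angle()
        image = self.render_observation(angle)
        box = self.acquire_box(image)
        frustum = self.back_project(angle, box)
        self.clip(frustum)
        return self.field
```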
In some embodiments, a target object model extraction device is further provided, where the target object model extraction device is provided with the target object model extraction system according to any one of the above embodiments, so that the target object model extraction device has the function and the effect of the target object model extraction method based on the radiation field according to any one of the above embodiments.
Fig. 10 is a schematic structural diagram of a controller according to an embodiment of the present invention.
Some embodiments of the present invention provide a controller including a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the radiation field based object model extraction method of any of the above embodiments when executing the computer program, for example, performing the method steps S110 to S150 in fig. 1, the method steps S210 to S250 in fig. 2, the method steps S310 to S330 in fig. 3, the method steps S410 to S420 in fig. 4, the method steps S510 to S520 in fig. 5, the method steps S610 to S630 in fig. 6, the method steps S710 to S720 in fig. 7, and the method steps S810 to S830 in fig. 8 described above.
The controller 1000 of the present embodiment includes one or more processors 1010 and a memory 1020, one processor 1010 and one memory 1020 being illustrated in fig. 10.
The processor 1010 and the memory 1020 may be connected by a bus or otherwise, for example in fig. 10.
Memory 1020 is a non-transitory computer readable storage medium that may be used to store non-transitory software programs as well as non-transitory computer executable programs. In addition, memory 1020 may include high-speed random access memory and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, memory 1020 optionally includes memory located remotely from the processor 1010; such remote memory may be connected to the controller 1000 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
In some embodiments, the processor executes the computer program at preset time intervals to perform the radiation field-based target object model extraction method according to any one of the above embodiments.
Those skilled in the art will appreciate that the structure shown in fig. 10 does not limit the controller 1000, which may include more or fewer components than shown, combine certain components, or use a different arrangement of components.
In the controller 1000 shown in fig. 10, the processor 1010 may be configured to invoke the program stored in the memory 1020, thereby implementing the radiation field-based target object model extraction method.
Based on the hardware structure of the controller 1000 described above, various embodiments of the target object model extraction system of the present invention are presented. The non-transitory software programs and instructions required to implement the radiation field-based target object model extraction method of the above embodiments are stored in the memory and, when executed by the processor, perform the radiation field-based target object model extraction method of the above embodiments.
In addition, the embodiment of the invention also provides a target object model extraction system, which comprises the controller.
In some embodiments, since the target object model extraction system of the embodiment of the present invention has the controller of the embodiment and the controller of the embodiment is capable of executing the radiation field-based target object model extraction method of the embodiment, the specific implementation and technical effects of the target object model extraction system of the embodiment of the present invention may refer to the specific implementation and technical effects of the radiation field-based target object model extraction method of any of the embodiments.
The embodiment of the present invention further provides a computer-readable storage medium storing computer-executable instructions which, when executed by one or more processors, cause the one or more processors to perform the radiation field-based target object model extraction method in the above-described method embodiments, for example, the method steps S110 to S150 in fig. 1, the method steps S210 to S250 in fig. 2, the method steps S310 to S330 in fig. 3, the method steps S410 to S420 in fig. 4, the method steps S510 to S520 in fig. 5, the method steps S610 to S630 in fig. 6, the method steps S710 to S720 in fig. 7, and the method steps S810 to S830 in fig. 8 described above.
The apparatus embodiments described above are merely illustrative; the units described as separate components may or may not be physically separate, that is, they may be located in one place or distributed over a plurality of network nodes. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
Those of ordinary skill in the art will appreciate that all or some of the steps, systems, and methods disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof. Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit, digital signal processor, or microprocessor, or as hardware, or as an integrated circuit, such as an application specific integrated circuit. Such software may be distributed on computer readable media, which may include computer readable storage media (or non-transitory media) and communication media (or transitory media). The term computer-readable storage medium includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer-readable instructions, data structures, program modules or other data, as known to those skilled in the art. Computer-readable storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital Versatile Disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. Furthermore, as is well known to those of ordinary skill in the art, communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media.
While the preferred embodiment of the present application has been described in detail, the present application is not limited to the above embodiment, and those skilled in the art can make various equivalent modifications or substitutions without departing from the spirit of the present application, and these equivalent modifications or substitutions are included in the scope of the present application as defined in the appended claims.

Claims (11)

1. A method for extracting a target object model based on a radiation field, the method comprising:
acquiring a first observation angle of a radiation field model, wherein the radiation field model comprises a plurality of object models;
obtaining a first observation image corresponding to the first observation angle according to the radiation field model;
acquiring a first frame selection frame corresponding to the object model in the first observation image, wherein the first frame selection frame is used for selecting a first target image on the first observation image in a frame mode, and the first target image is an image obtained by projecting the target object model on the first observation image;
performing back projection processing according to the first frame selection frame to obtain a first model extraction viewing cone;
according to the first model extraction view cone, cutting the object model, which is located outside the range of the first model extraction view cone, in the radiation field model to obtain the target object model;
wherein the clipping of the object model located outside the range of the first model extraction viewing cone in the radiation field model comprises: setting, according to the first model extraction viewing cone, the opacity of the object model outside the range of the first model extraction viewing cone in the radiation field model to 0; or, reducing, according to the first model extraction viewing cone, the modeling area of the radiation field model to the model area corresponding to the first model extraction viewing cone.
2. The method according to claim 1, wherein after the object model of the radiation field model located outside the range of the first model extraction viewing cone is cut, the method further comprises:
acquiring a second observation angle of the radiation field model;
obtaining a second observation image corresponding to the second observation angle according to the radiation field model;
acquiring a second frame selection frame corresponding to the object model in the second observation image, wherein the second frame selection frame is used for selecting a second target image on the second observation image in a frame mode, and the second target image is an image obtained by projecting the target object model on the second observation image;
performing back projection processing according to the second frame selection frame to obtain a second model extraction viewing cone;
and extracting a viewing cone according to the second model, and cutting the object model which is positioned outside the range of the second model extraction viewing cone in the radiation field model.
3. The method of claim 1, wherein the obtaining a first observation angle of the radiation field model comprises:
acquiring a preset training image set for generating the radiation field model;
acquiring a preset angle set corresponding to the preset training image set;
and obtaining a target preset angle from the preset angle set, and determining the target preset angle as the first observation angle.
4. A method of extracting a radiation field based object model according to claim 3, wherein the obtaining a first observation image corresponding to the first observation angle according to the radiation field model includes:
obtaining a target training image from the preset training image set according to the target preset angle;
the target training image is determined as the first observation image.
5. The method for extracting a radiation field-based object model according to claim 1, wherein the obtaining a first observation image corresponding to the first observation angle according to the radiation field model includes:
Rendering the radiation field model according to the first observation angle to obtain a rendered image;
the rendered image is determined as the first observation image.
6. The method for extracting a radiation field-based object model according to claim 1, wherein the performing back projection processing according to the first frame selection frame to obtain a first model extraction viewing cone comprises:
obtaining the position information of the observation point according to the first observation image;
obtaining a viewing cone edge according to the position information of the observation point and the first frame selection frame;
and obtaining a first model to extract the viewing cone according to the viewing cone edge.
7. The method for extracting a target object model based on a radiation field according to claim 1, wherein the performing back projection processing according to the first frame selection frame to obtain a first model extraction viewing cone further comprises:
determining a region, which is located in the first model extraction view cone range, in the radiation field model as a target region;
determining a region, which is positioned outside the first model extraction viewing cone range, in the radiation field model as a non-target region;
and cutting the non-target area model in the radiation field model to obtain a target area model in the radiation field model, and determining the target area model as the target object model.
8. The method of claim 1, wherein the observation angles of the radiation field model include an X-axis projection angle, a Y-axis projection angle, and a Z-axis projection angle.
9. A radiation field-based object model extraction system, the system comprising:
the angle acquisition module is used for acquiring a first observation angle of a radiation field model, wherein the radiation field model comprises a plurality of object models;
the image generation module is used for obtaining a first observation image corresponding to the first observation angle according to the radiation field model;
the frame acquisition module is used for acquiring a first frame selection frame corresponding to the object model in the first observation image, wherein the first frame selection frame is used for selecting a first target image on the first observation image in a frame mode, and the first target image is an image obtained by projecting the target object model on the first observation image;
the view cone generating module is used for performing back projection processing according to the first frame selection frame to obtain a first model extraction view cone;
the clipping processing module is used for extracting a viewing cone according to the first model, clipping the object model which is positioned outside the range of the first model extraction viewing cone in the radiation field model, and obtaining the target object model;
wherein the clipping of the object model located outside the range of the first model extraction viewing cone in the radiation field model comprises: setting, according to the first model extraction viewing cone, the opacity of the object model outside the range of the first model extraction viewing cone in the radiation field model to 0; or, reducing, according to the first model extraction viewing cone, the modeling area of the radiation field model to the model area corresponding to the first model extraction viewing cone.
10. A controller comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the radiation field based object model extraction method according to any one of claims 1 to 8 when the computer program is executed.
11. A computer-readable storage medium storing computer-executable instructions for performing the radiation field-based object model extraction method according to any one of claims 1 to 8.
CN202211590074.6A 2022-12-12 2022-12-12 Method, system and controller for extracting target object model based on radiation field Active CN115984458B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211590074.6A CN115984458B (en) 2022-12-12 2022-12-12 Method, system and controller for extracting target object model based on radiation field

Publications (2)

Publication Number Publication Date
CN115984458A (en) 2023-04-18
CN115984458B (en) 2023-10-03

Family

ID=85965712

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211590074.6A Active CN115984458B (en) 2022-12-12 2022-12-12 Method, system and controller for extracting target object model based on radiation field

Country Status (1)

Country Link
CN (1) CN115984458B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109920048A (en) * 2019-02-15 2019-06-21 北京清瞳时代科技有限公司 Monitored picture generation method and device
CN113139453A (en) * 2021-04-19 2021-07-20 中国地质大学(武汉) Orthoimage high-rise building base vector extraction method based on deep learning
CN113902887A (en) * 2021-11-01 2022-01-07 江西博微新技术有限公司 Three-dimensional visual edge generation method, system, computer and readable storage medium
CN114298151A (en) * 2021-11-19 2022-04-08 安徽集萃智造机器人科技有限公司 3D target detection method based on point cloud data and image data fusion
CN115359195A (en) * 2022-07-18 2022-11-18 北京建筑大学 Orthoimage generation method and device, storage medium and electronic equipment
CN115457188A (en) * 2022-09-19 2022-12-09 遥在(山东)数字科技有限公司 3D rendering display method and system based on fixation point

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11462023B2 (en) * 2019-11-14 2022-10-04 Toyota Research Institute, Inc. Systems and methods for 3D object detection

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on the integration method of building information model and 3D WebGIS platform; Zhang Xiaoyong; Shen Rui; Wen Danqi; Surveying and Mapping (Cehui), No. 04; full text *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant