CN114338981B - Automatic range-finding area and volume camera for experiment - Google Patents


Info

Publication number
CN114338981B
CN114338981B (granted publication of application CN202111576980.6A)
Authority
CN
China
Prior art keywords
unit
area
target object
model
volume
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111576980.6A
Other languages
Chinese (zh)
Other versions
CN114338981A (en)
Inventor
田莎
田雪飞
郭垠梅
吴若霞
黄晓蒂
周青
符嘉骏
梁子成
胡玉星
冯婷
Current Assignee
Hunan University of Chinese Medicine
Original Assignee
Hunan University of Chinese Medicine
Priority date
Filing date
Publication date
Application filed by Hunan University of Chinese Medicine filed Critical Hunan University of Chinese Medicine
Priority to CN202111576980.6A priority Critical patent/CN114338981B/en
Publication of CN114338981A publication Critical patent/CN114338981A/en
Application granted granted Critical
Publication of CN114338981B publication Critical patent/CN114338981B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Abstract

The application discloses an automatic range-finding area and volume camera for experiments. It comprises a movable trolley body whose top is fixedly connected with a movable rail frame; the top of the rail frame is slidably connected with a movable sleeve, the top of the sleeve is fixedly connected with an electric push rod, the top of the push rod is fixedly connected with a substrate, and the top of the substrate is rotatably connected with a rotary table. Through the image acquisition system, the image processing system and the image display system, the target object is highlighted and a model of it is built; the solid model length, projection view area, appearance area and volume are calculated as needed, the model proportion is derived from actual reference point data, and the resulting data of the real object are displayed on the image display screen. Transparent grid marks on the screen intuitively display the size, perspective area, three-dimensional body surface area and volume of the target object with strong readability, reducing both manual workload and measurement error.

Description

Automatic range-finding area and volume camera for experiment
Technical Field
The application relates to the field of photographic devices, in particular to an automatic range-finding area and volume camera for experiments.
Background
Subcutaneous tumors observed by the naked eye vary widely in form, and their form can reflect, to a certain extent, whether a tumor is benign or malignant.
In the prior art, a subcutaneous transplanted tumor is observed by eye, or photographed with a camera and its area and volume then measured and calculated with a ruler from the picture. Both approaches carry large errors, and measuring the size of an irregularly shaped tumor is very inconvenient.
Therefore, it is necessary to invent an automatic range-finding area-volume camera for experiments to solve the above-mentioned problems.
Disclosure of Invention
The application aims to provide an automatic range-finding area and volume camera for experiments, to solve the prior-art problems of large calculation errors and the great inconvenience of measuring irregularly shaped tumors.
In order to achieve the above object, the present application provides the following technical solutions: the automatic range finding standard area and volume camera for experiments comprises a mobile car body, wherein the top of the mobile car body is fixedly connected with a mobile rail frame, the top of the mobile rail frame is slidably connected with a mobile sleeve, the top of the mobile sleeve is fixedly connected with an electric push rod, the top of the electric push rod is fixedly connected with a substrate, the top of the substrate is rotationally connected with a rotary table, the axis of the rotary table is fixedly connected with a centering shaft, the top of the centering shaft is detachably connected with a high-definition camera, the connecting end of the high-definition camera is provided with an image acquisition system, an image processing system and an image display system, the image acquisition system and the image display system are respectively arranged at the input end and the output end of the image processing system, the image acquisition system is used for acquiring a shot picture of the high-definition camera, the shooting end of the high-definition camera is provided with an electric zoom lens, one side of the high-definition camera is also provided with a cross cursor positioner, the image processing system comprises a picture processing module, a model building module and a size calculating module, and the connecting end of the image display system is provided with a data deriving module;
the image processing module is used for processing the shot high-definition image to obtain a target object area, and comprises an image acquisition unit, a grid reference adaptation unit, a target object identification marking unit and an area approval unit, wherein the image acquisition unit is used for receiving an electronic image shot by the high-definition camera, and the image acquisition unit, the grid reference adaptation unit, the target object identification marking unit and the area approval unit are sequentially connected and sequentially process the electronic image;
the model construction module is used for carrying out reference quantity equal proportion reduction on a target object and establishing a model, and comprises a mark capturing unit, a near point radiation unit, a grid line connection unit and a curved surface generation unit, wherein the mark capturing unit, the near point radiation unit, the grid line connection unit and the curved surface generation unit are connected in sequence;
the dimension calculation module calculates the actual dimension of the target object according to the actual proportion by calculating the dimension of the model, and comprises an area block extraction unit, a curved surface extension tiling unit, an area segmentation union unit, a model compression simulation unit, a difference calculation unit and a labeling data generation unit, wherein the area block extraction unit, the curved surface extension tiling unit, the area segmentation union unit and the model compression simulation unit are all connected with the difference calculation unit, and the output end of the difference calculation unit is electrically connected with the input end of the labeling data generation unit.
Preferably, the electric zoom lens is used for zooming the shooting lens, and the high-definition camera shoots multi-angle views of the target object. A plurality of positioning points, including a base reference positioning point and further reference positioning points, are marked on the target object with labels; the cross cursor positioner is aligned with the reference positioning point in each viewing angle, and front views of the multiple viewing angles are shot.
Preferably, the grid reference adaptation unit builds transparent grid lines on the picture when processing the electronic picture, captures reference positioning points when the grid lines are built, builds straight lines by taking the straight line distance between the two reference positioning points as a unit length, builds vertical lines until the transparent grid lines are built in the unit length, coordinate data are arranged on each grid line, the transparent grid lines are fixedly attached to the electronic picture after the building is completed, the grid lines are simultaneously enlarged or reduced along with the electronic picture, the target object identification marking unit highlights the target object from the electronic picture, and particularly uses color filling and boundary contour marking, and the region approval unit confirms or manually operates to selectively adjust the marking region of the target object.
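The grid-construction step described above can be sketched as follows. This is only an illustration under assumed conventions (pixel coordinates, NumPy arrays); the function and variable names are hypothetical and do not come from the patent. The unit length is taken as the pixel distance between the two reference positioning points, and grid lines are laid out at that spacing until they cover the picture:

```python
import numpy as np

def build_grid(ref_a, ref_b, image_shape):
    """Build transparent grid-line coordinates whose unit length is the
    pixel distance between two reference positioning points.
    ref_a, ref_b: (x, y) pixel coordinates of the reference points.
    image_shape: (height, width) of the electronic picture in pixels."""
    unit = float(np.hypot(ref_b[0] - ref_a[0], ref_b[1] - ref_a[1]))
    h, w = image_shape
    xs = np.arange(0, w + unit, unit)  # x-positions of vertical grid lines
    ys = np.arange(0, h + unit, unit)  # y-positions of horizontal grid lines
    return unit, xs, ys

unit, xs, ys = build_grid((100, 100), (160, 180), (480, 640))
print(round(unit, 2))  # → 100.0, the pixel distance between the two points
```

Because each grid line carries its own coordinate, scaling the picture scales `unit`, `xs` and `ys` by the same factor, which is what lets the grid "follow" the picture when it is enlarged or reduced.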
Preferably, the mark capturing unit extracts and stores the mark region of the confirmed object, the near point radiating unit establishes a depth point bitmap of each positioning point in the object picture and the divergent points between the adjacent positioning points, the grid line connecting unit connects the depth point maps to form a grid outline, and the curved surface generating unit fills the area between the lines of the grid outline to form a solid model corresponding to the object.
Preferably, the region block extracting unit extracts a multi-angle projection view of the solid model, the curved surface extending tiling unit stretches and flattens the appearance depth of the multi-angle view to obtain the tiling image, the region segmentation union unit segments and fills the projection view or the tiling image on the transparent grid line, the grid unit on the transparent grid line is used for constructing a proportion gradient, the model compression simulation unit is used for stacking the solid model deformation constructed by the corresponding target object in a three-dimensional grid constructed in unit length, the difference calculation unit is used for calculating the physical model length, the physical model projection view area, the physical model appearance area and the physical model volume according to the coordinate difference value of the two-dimensional plan view and the three-dimensional stacking block after the union, and inputting the actual sizes of two reference positioning points used for constructing the transparent grid line into the labeling data generating unit, and calculating each item of data of the target object according to the ratio of the size of the solid model.
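The segmentation-and-union idea behind the area figures can be illustrated with a minimal sketch: treat the confirmed target region as a boolean mask, and convert the filled pixel count to real area through the grid unit. The names and the mask representation are assumptions for illustration, not patent code:

```python
import numpy as np

def grid_area(mask, unit_px, unit_real):
    """Approximate a projection-view area from a filled target-region mask.
    unit_px:   grid unit length in pixels (distance between reference points)
    unit_real: the same distance in real-world units (e.g. cm).
    Each pixel covers (unit_real / unit_px)^2 of real area."""
    cell_real_area = (unit_real / unit_px) ** 2
    return int(mask.sum()) * cell_real_area

# A 100x100 px filled region, with 100 px of grid unit equal to 1.0 cm:
mask = np.zeros((200, 200), dtype=bool)
mask[50:150, 50:150] = True
area = grid_area(mask, 100.0, 1.0)  # ≈ 1.0 cm^2
```

Counting filled cells in this way handles irregular outlines naturally, which is the motivation the background section gives for replacing ruler-based measurement.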
The size calculation module is also provided with a volume direct calculation unit, the volume direct calculation unit directly calculates the volume of the target object according to a volume calculation model according to the reading of the three-dimensional image of the target object area in the grid line distance difference, and the volume calculation model has the following specific formula:
V=0.25×L×W×H
wherein L is the longest diameter of the object three-dimensional model, W is the longest transverse diameter perpendicular to the longest diameter, H is the height of the object, and V is the estimated volume of the object.
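The direct volume estimate is a one-line formula; a sketch of it as a function (the name is hypothetical, the formula is the patent's):

```python
def estimated_volume(L, W, H):
    """Volume calculation model from the specification: V = 0.25 * L * W * H.
    L: longest diameter of the three-dimensional model of the target object
    W: longest transverse diameter perpendicular to L
    H: height of the target object"""
    return 0.25 * L * W * H

v = estimated_volume(10.0, 8.0, 5.0)  # → 100.0
```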
Preferably, the image display screen displays a list of all data of the photographed picture and the target object, and the data export module is used for exporting the picture and the data.
Preferably, an amplifying display unit and a size display unit are arranged in the image display screen, the amplifying display unit is used for amplifying the image of the target object, and the size display unit is used for displaying various size data of the target object area.
In the technical scheme, the application has the technical effects and advantages that:
the object is highlighted and a model is built through the image acquisition system, the image processing system and the image display system, the physical model length, the projection view area, the appearance area and the volume of the object are calculated according to actual needs, the model proportion is calculated according to actual reference point data, various data of the actual object are obtained and displayed on the image display screen, meanwhile, transparent grid marks are arranged on the image display screen, the size, the perspective area, the three-dimensional body surface area and the volume of the object are intuitively displayed, the readability is high, the manual workload is reduced, and the measurement error is reduced.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings required for the embodiments will be briefly described below. It is apparent that the drawings in the following description show only some embodiments of the present application, and that other drawings may be obtained from them by a person of ordinary skill in the art.
FIG. 1 is a block diagram of the overall system architecture of the present application;
FIG. 2 is a schematic diagram of a picture processing module according to the present application;
FIG. 3 is a schematic diagram of a model building block of the present application;
FIG. 4 is a schematic diagram of a dimension calculation module according to the present application;
FIG. 5 is a schematic diagram of an image display screen according to the present application;
FIG. 6 is a schematic diagram of the structure of the product of the present application.
Reference numerals:
1 image acquisition system, 2 image processing system, 3 image display system, 4 high-definition camera, 5 electric zoom lens, 6 cross cursor positioner, 7 image processing module, 71 image acquisition unit, 72 grid reference adaptation unit, 73 object identification marking unit, 74 region approval unit, 8 model construction module, 81 mark capturing unit, 82 near point radiation unit, 83 grid line connection unit, 84 curved surface generation unit, 9 size calculation module, 91 region block extraction unit, 92 curved surface extension tiling unit, 93 region segmentation union unit, 94 model compression simulation unit, 95 difference calculation unit, 96 labeling data generation unit, 10 image display screen, 101 size display unit, 102 amplifying display unit, 11 data export module, 12 moving vehicle body, 13 moving rail frame, 14 moving sleeve, 15 electric push rod, 16 substrate, 17 turntable, 18 centering shaft.
Detailed Description
In order to make the technical scheme of the present application better understood by those skilled in the art, the present application will be further described in detail with reference to the accompanying drawings.
The application provides an automatic range-finding area and volume camera for experiments as shown in figures 1-6, which comprises a mobile car body 12, wherein the top of the mobile car body 12 is fixedly connected with a mobile rail frame 13, the top of the mobile rail frame 13 is slidably connected with a mobile sleeve 14, the top of the mobile sleeve 14 is fixedly connected with an electric push rod 15, the top of the electric push rod 15 is fixedly connected with a substrate 16, the top of the substrate 16 is rotatably connected with a rotary table 17, the axis of the rotary table 17 is fixedly connected with a centering shaft 18, the top of the centering shaft 18 is detachably connected with a high-definition camera 4, the connecting end of the high-definition camera 4 is provided with an image acquisition system 1, an image processing system 2 and an image display system 3, the image acquisition system 1 and the image display system 3 are respectively arranged at the input end and the output end of the image processing system 2, the image acquisition system 1 is used for acquiring shooting pictures of the high-definition camera 4, the shooting end of the high-definition camera 4 is provided with an electric zoom lens 5, one side of the high-definition camera 4 is also provided with a cross cursor positioner 6, the image processing system 2 comprises a picture processing module 7, a model construction module 8 and a size calculation module 9, the image display system 3 is provided with an image display screen 10, and the connecting end of the image display system 3 is provided with a data export module 11;
the image processing module 7 is configured to process the photographed high-definition image to obtain a target object area, the image processing module 7 includes an image acquisition unit 71, a grid reference adaptation unit 72, a target object identification marking unit 73, and an area approval unit 74, and the image acquisition unit 71 receives an electronic image photographed by the high-definition camera 4, and the image acquisition unit 71, the grid reference adaptation unit 72, the target object identification marking unit 73, and the area approval unit 74 are sequentially connected to process the electronic image successively;
the model construction module 8 is used for performing reference quantity equal proportion reduction on a target object and establishing a model, the model construction module 8 comprises a mark capturing unit 81, a near point radiation unit 82, a grid line connection unit 83 and a curved surface generation unit 84, and the mark capturing unit 81, the near point radiation unit 82, the grid line connection unit 83 and the curved surface generation unit 84 are sequentially connected;
the size calculating module 9 calculates the actual size of the target object according to the actual proportion by calculating the size of the model, the size calculating module 9 comprises an area block extracting unit 91, a curved surface extending and tiling unit 92, an area dividing and tiling unit 93, a model compression simulating unit 94, a difference calculating unit 95 and a marking data generating unit 96, the area block extracting unit 91, the curved surface extending and tiling unit 92, the area dividing and tiling unit 93 and the model compression simulating unit 94 are all connected with the difference calculating unit 95, and the output end of the difference calculating unit 95 is electrically connected with the input end of the marking data generating unit 96.
Further, in the above technical solution, the electric zoom lens 5 is configured to zoom the photographing lens, the high-definition camera 4 photographs a multi-angle view of the target object, marks a plurality of positioning points on the target object by using a label, including a reference positioning point and a reference positioning point, aligns the reference positioning point in each view angle by using the cross cursor positioner 6, and photographs a front view of the multi-view angle.
Further, in the above technical solution, the grid reference adapting unit 72 builds transparent grid lines on the picture when processing the electronic picture, captures reference positioning points when the grid lines are built, builds straight lines with the straight line distance between two reference positioning points as a unit length, builds vertical lines until the transparent grid lines are built with the unit length, each grid line is provided with coordinate data, the transparent grid lines are fixedly attached to the electronic picture after the building is completed, the grid lines follow the electronic picture and simultaneously zoom in or zoom out, the object identification marking unit 73 highlights the object from the electronic picture, specifically uses color filling and boundary contour marking, and the area approval unit 74 performs confirmation or manual operation selection adjustment on the marked area of the object.
Further, in the above-mentioned technical solution, the mark capturing unit 81 extracts and stores the mark region of the confirmed object, the near point radiation unit 82 establishes a depth point bitmap of the divergent points between each positioning point and its adjacent positioning points in the object picture, the grid line connection unit 83 connects the depth point bitmap to form a grid outline, and the curved surface generating unit 84 fills the area between the lines of the grid outline to form a solid model corresponding to the object.
Further, in the above technical solution, the area block extracting unit 91 extracts a multi-angle projection view of the solid model, the curved surface extending and tiling unit 92 stretches and flattens the appearance depth of the multi-angle view to obtain a tiling image, the area dividing and integrating unit 93 segments and fills the projection view or tiling image on the transparent grid line, and builds a proportion gradient on the grid unit on the transparent grid line, the model compressing and simulating unit 94 stacks the solid model deformation built by the corresponding object in the three-dimensional grid built in unit length, the difference calculating unit 95 calculates the physical model length, the physical model projection view area, the physical model appearance area and the physical model volume of the integrated two-dimensional plan and three-dimensional stacking blocks according to the coordinate difference, and inputs the actual sizes of two reference positioning points used for building the transparent grid line and the physical model size into the labeling data generating unit 96, and calculates each item of the object according to the ratio.
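The final ratio step — converting quantities measured on the solid model into actual data of the target object — can be sketched as a scaling by the ratio of the real reference-point distance to the model's grid unit, raised to the dimension of the quantity. The helper below is hypothetical and only illustrates this proportionality:

```python
def to_actual(model_value, model_unit, actual_unit, dimension=1):
    """Scale a quantity measured on the solid model to the real object.

    model_unit:  grid unit length as measured on the model (e.g. pixels)
    actual_unit: real distance between the two reference positioning points
    dimension:   1 for lengths, 2 for areas, 3 for volumes
    """
    ratio = actual_unit / model_unit
    return model_value * ratio ** dimension

# Example: 100 px between the reference points corresponds to 2.0 cm.
length_cm = to_actual(250.0, 100.0, 2.0, dimension=1)
area_cm2 = to_actual(50_000.0, 100.0, 2.0, dimension=2)
```

Raising the ratio to the quantity's dimension is what lets a single pair of reference points calibrate lengths, areas and volumes at once.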
The size calculation module 9 is also provided with a volume direct calculation unit, the volume direct calculation unit directly calculates the volume of the target object according to a volume calculation model according to the reading of the three-dimensional image of the target object area in the grid line distance difference, and the volume calculation model has the following specific formula:
V=0.25×L×W×H
wherein L is the longest diameter of the object three-dimensional model, W is the longest transverse diameter perpendicular to the longest diameter, H is the height of the object, and V is the estimated volume of the object.
Further, in the above technical solution, the image display screen 10 displays a shot picture and a list of various data of the object, and the data deriving module 11 is configured to derive the picture and the data.
Further, in the above technical solution, an enlarged display unit 102 and a size display unit 101 are disposed in the image display screen 10, where the enlarged display unit 102 is used for enlarging the image of the target object, and the size display unit 101 is used for displaying the size data of each item of the target object area.
Example 1:
the application is used for shooting subcutaneous transplanted tumor, when shooting, a plurality of positioning points are marked by using labeling on the surface of the tumor, wherein the positioning points comprise a reference positioning point with four sides of front view angles and a plurality of reference positioning points, after the electric zoom lens 5 is adjusted to correspond to the best shooting focal length, the cross cursor positioner 6 is used for aligning the reference positioning points in each view angle, a front view photo with multiple view angles is shot, a picture processing module 7 constructs a transparent grid line which follows the change on the picture, coordinate data is attached to the transparent grid line, the tumor part is highlighted on the photo, color filling and boundary contour line marking are used, a model construction module 8 extracts a selected tumor area, a tumor solid model is established, a size calculation module 9 calculates the actual size, the projection view area, the appearance area and the volume of the tumor solid model, and various data of the tumor are calculated according to the ratio with the size of the solid model and are displayed on an image display screen 10.
The working principle of the application is as follows:
referring to fig. 1-6 of the specification, when the application is used, firstly, a plurality of positioning points are marked on the surface of a target object by using labeling, wherein the positioning points comprise a reference positioning point with four sides of a front view angle and a plurality of reference positioning points, after the electric zoom lens 5 is adjusted to correspond to an optimal photographing focal distance, the positioning points are aligned to the reference positioning points in each view angle by using a cross cursor positioner 6, a front view picture with multiple view angles is photographed, a picture processing module 7 constructs a transparent grid line which follows the change on a picture, coordinate data are attached to the transparent grid line, the target object is highlighted on the picture, and the grid line is marked by using color filling and boundary contour lines, a model construction module 8 extracts a selected target object area, a grid contour is constructed according to the interconnection lines of the coordinate points and the radiation points, a target object entity model is further established, a dimension calculation module 9 extracts a multi-angle projection view of the target object entity model, a curved surface extension tiling unit 92 obtains a tiling picture by stretching the appearance depth of the view of the electric zoom lens, a region segmentation and union unit 93 segments the projection view or tiling picture on the transparent grid line, a transparent grid unit construction proportion is constructed on the transparent grid line, a corresponding to the target object entity area is compressed and a dimension simulation unit 94 is stacked on the basis of the three-dimensional grid line, a dimension of the three-dimensional entity model is calculated by using the dimension of the three-dimensional entity model and the dimension of the three-dimensional entity model is calculated, the three-dimensional 
entity model is stacked by the dimension of the dimension entity model is calculated, and the dimension of the entity model is calculated by the dimension of the dimension unit is calculated by the dimension of the dimension unit and the dimension of the entity model is a graph and the dimension of the entity model is stacked by the dimension of the entity model and the dimension of the figure is based, and the image display screen 10 displays pictures, and simultaneously, the transparent grid marks are carried, and the size, perspective area, three-dimensional body surface area and volume of the target object are intuitively displayed and have strong readability.
While certain exemplary embodiments of the present application have been described above by way of illustration only, it will be apparent to those of ordinary skill in the art that modifications may be made to the described embodiments in various different ways without departing from the spirit and scope of the application. Accordingly, the drawings and description are to be regarded as illustrative in nature and not as restrictive of the scope of the application, which is defined by the appended claims.

Claims (8)

1. An automatic range-finding area and volume camera for experiments, comprising a movable car body (12), characterized in that: the top of the mobile car body (12) is fixedly connected with a mobile rail frame (13), the top of the mobile rail frame (13) is slidably connected with a mobile sleeve (14), the top of the mobile sleeve (14) is fixedly connected with an electric push rod (15), the top of the electric push rod (15) is fixedly connected with a base plate (16), the top of the base plate (16) is rotatably connected with a rotary table (17), the axis of the rotary table (17) is fixedly connected with a centering shaft (18), the top of the centering shaft (18) is detachably connected with a high-definition camera (4), the connecting end of the high-definition camera (4) is provided with an image acquisition system (1), an image processing system (2) and an image display system (3), the image acquisition system (1) and the image display system (3) are respectively arranged at the input end and the output end of the image processing system (2), the image acquisition system (1) is used for acquiring shooting pictures of the high-definition camera (4), the shooting end of the high-definition camera (4) is provided with an electric zoom lens (5), one side of the high-definition camera (4) is also provided with a cross cursor positioner (6), the image processing system (2) comprises a picture processing module (7), a model construction module (8) and a size calculation module (9), the image display system (3) is provided with an image display screen (10), and the connecting end of the image display system (3) is provided with a data export module (11);
the image processing module (7) is used for processing the shot high-definition image to obtain a target object area, the image processing module (7) comprises an image acquisition unit (71), a grid reference adaptation unit (72), a target object identification marking unit (73) and an area approval unit (74), the image acquisition unit (71) receives an electronic image shot by the high-definition camera (4), and the image acquisition unit (71), the grid reference adaptation unit (72), the target object identification marking unit (73) and the area approval unit (74) are sequentially connected to process the electronic image successively;
the model construction module (8) is used for carrying out reference quantity equal proportion reduction on a target object and establishing a model, the model construction module (8) comprises a mark capturing unit (81), a near point radiation unit (82), a grid line connection unit (83) and a curved surface generation unit (84), and the mark capturing unit (81), the near point radiation unit (82), the grid line connection unit (83) and the curved surface generation unit (84) are sequentially connected;
the size calculation module (9) calculates the actual size of the target object according to the actual proportion by calculating the size of the model, the size calculation module (9) comprises an area block extraction unit (91), a curved surface extension tiling unit (92), an area segmentation union unit (93), a model compression simulation unit (94), a difference set calculation unit (95) and a marking data generation unit (96), the area block extraction unit (91), the curved surface extension tiling unit (92), the area segmentation union unit (93) and the model compression simulation unit (94) are all connected with the difference set calculation unit (95), and the output end of the difference set calculation unit (95) is electrically connected with the input end of the marking data generation unit (96).
2. An automatic range-finding area and volume camera for experiments as claimed in claim 1, wherein: the electric zoom lens (5) is used for zooming the shooting lens, the high-definition camera (4) shoots multi-angle views of the target object, a plurality of positioning points, including a base reference positioning point and further reference positioning points, are marked on the target object with labels, the cross cursor positioner (6) is aligned with the reference positioning point in each viewing angle, and front views of the multiple viewing angles are shot.
3. An automatic range finding area, volume camera for experiments as claimed in claim 2, wherein: the grid reference adaptation unit (72) builds transparent grid lines on the picture when the electronic picture is processed, captures reference positioning points when the grid lines are built, builds straight lines by taking the straight line distance between the two reference positioning points as unit length, builds vertical lines until the transparent grid lines are built in unit length, coordinate data are arranged on each grid line, the transparent grid lines are fixedly attached to the electronic picture after the building is completed, the grid lines are simultaneously enlarged or reduced along with the electronic picture, the target object identification marking unit (73) highlights the target object from the electronic picture, and particularly uses color filling and boundary contour marking, and the region approval unit (74) confirms or manually operates and selects and adjusts the marking region of the target object.
4. An automatic range-finding area and volume camera for experiments according to claim 3, characterized in that: the mark capturing unit (81) extracts and stores the confirmed marked region of the target object; the near-point radiation unit (82) establishes a depth point bitmap from each positioning point in the target object picture and the divergence points between adjacent positioning points; the grid line connection unit (83) connects the depth point bitmap into a mesh outline; and the curved surface generation unit (84) fills the areas between the lines of the mesh outline to form a solid model of the target object.
5. An automatic range-finding area and volume camera for experiments as set forth in claim 4, wherein: the region block extraction unit (91) extracts multi-angle projection views of the solid model; the curved surface extension tiling unit (92) stretches and flattens the surface depth of the multi-angle views to obtain tiled images; the region segmentation union unit (93) segments and fills the projection views or tiled images on the transparent grid lines, using the grid cells on the transparent grid lines to construct a proportional gradient; the model compression simulation unit (94) stacks a deformed copy of the solid model of the target object within a three-dimensional grid built at unit length; the difference set calculation unit (95) calculates, from coordinate differences, the model length, the model projection-view area, the model surface area and the model volume of the unioned two-dimensional plans and three-dimensional stacked blocks; and the actual distance between the two reference positioning points used to build the transparent grid lines, together with the model sizes, is input into the marking data generation unit (96), which calculates the various data of the target object by ratio.
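The ratio step at the end of claim 5 — converting model measurements in grid units into real-world values using the known actual distance between the two reference positioning points — is sketched below. This is an assumed illustration of the scaling arithmetic only; `actual_measurements` is a hypothetical helper, and the key point is that lengths scale with the ratio, areas with its square, and volumes with its cube.

```python
def actual_measurements(model, ref_dist_px, ref_dist_actual):
    """Scale model measurements (in pixel/grid units) to real-world
    values, given the pixel distance and the actual distance between
    the two reference positioning points used to build the grid.

    model: dict with 'length', 'area', 'volume' in pixel units.
    """
    s = ref_dist_actual / ref_dist_px        # linear scale factor
    return {
        "length": model["length"] * s,       # lengths scale with s
        "area":   model["area"]   * s ** 2,  # areas scale with s^2
        "volume": model["volume"] * s ** 3,  # volumes scale with s^3
    }
```

For example, if the reference points are 50 px apart and 5 cm apart in reality, a model length of 100 px corresponds to 10 cm, while a model area of 400 px² corresponds to 4 cm².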
6. An automatic range-finding area and volume camera for experiments as set forth in claim 5, wherein: the size calculation module (9) is further provided with a direct volume calculation unit, which reads the three-dimensional image of the target object region against the grid-line spacing and calculates the volume of the target object directly from a volume calculation model given by the following formula:
V = 0.25 × L × W × H
where L is the longest diameter of the object's three-dimensional model, W is the longest transverse diameter perpendicular to L, H is the height of the object, and V is the estimated volume of the object.
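The direct volume estimate of claim 6 is a one-line computation; a minimal sketch follows, with `estimate_volume` as a hypothetical helper name. L, W and H are assumed to be read off the grid-line spacing in the same length unit, so V comes out in that unit cubed.

```python
def estimate_volume(L, W, H):
    """Direct volume estimate per the patent's formula V = 0.25*L*W*H.

    L: longest diameter of the 3-D model of the object
    W: longest transverse diameter perpendicular to L
    H: height of the object
    All three must be in the same length unit (e.g. grid units or cm).
    """
    if min(L, W, H) <= 0:
        raise ValueError("dimensions must be positive")
    return 0.25 * L * W * H
```

For instance, readings of L = 4, W = 2, H = 3 grid units give an estimated volume of 6 cubic grid units.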
7. An automatic range-finding area and volume camera for experiments as set forth in claim 5, wherein: the image display screen (10) displays the shot pictures and lists of the various data of the target object, and the data export module (11) exports the pictures and data.
8. An automatic range-finding area and volume camera for experiments as set forth in claim 7, wherein: the image display screen (10) is provided with a magnifying display unit (102) and a size display unit (101); the magnifying display unit (102) magnifies the image of the target object, and the size display unit (101) displays the various size data of the target object region.
CN202111576980.6A 2021-12-22 2021-12-22 Automatic range-finding area and volume camera for experiment Active CN114338981B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111576980.6A CN114338981B (en) 2021-12-22 2021-12-22 Automatic range-finding area and volume camera for experiment


Publications (2)

Publication Number Publication Date
CN114338981A CN114338981A (en) 2022-04-12
CN114338981B true CN114338981B (en) 2023-11-07

Family

ID=81054022

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111576980.6A Active CN114338981B (en) 2021-12-22 2021-12-22 Automatic range-finding area and volume camera for experiment

Country Status (1)

Country Link
CN (1) CN114338981B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019196133A1 (en) * 2018-04-09 2019-10-17 杭州瑞杰珑科技有限公司 Head-mounted visual aid device
CN112465960A (en) * 2020-12-18 2021-03-09 天目爱视(北京)科技有限公司 Dimension calibration device and method for three-dimensional model
CN112561930A (en) * 2020-12-10 2021-03-26 武汉光庭信息技术股份有限公司 System and method for real-time framing of target in video stream
WO2021185220A1 (en) * 2020-03-16 2021-09-23 左忠斌 Three-dimensional model construction and measurement method based on coordinate measurement


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Monocular vision measurement method for target ranging; Han Yanxiang; Zhang Zhisheng; Dai Min; Optics and Precision Engineering; Vol. 19, No. 5; full text *

Also Published As

Publication number Publication date
CN114338981A (en) 2022-04-12

Similar Documents

Publication Publication Date Title
CN106097300B (en) A kind of polyphaser scaling method based on high-precision motion platform
CN103714535B (en) Binocular vision measurement system camera parameter online adjustment method
CN107610185A (en) A kind of fisheye camera fast calibration device and scaling method
CN103292695A (en) Monocular stereoscopic vision measuring method
CN108629829B (en) Three-dimensional modeling method and system of the one bulb curtain camera in conjunction with depth camera
CN110009690A (en) Binocular stereo vision image measuring method based on polar curve correction
CN107633532B (en) Point cloud fusion method and system based on white light scanner
CN104279960A (en) Method for measuring size of object through mobile device
CN105513128A (en) Kinect-based three-dimensional data fusion processing method
CN102221331A (en) Measuring method based on asymmetric binocular stereovision technology
CN106204560A (en) Colony picker automatic calibration method
CN109272555A (en) A kind of external parameter of RGB-D camera obtains and scaling method
CN106651958B (en) Object recognition device and method for moving object
CN104933704A (en) Three-dimensional scanning method and system
CN101551907A (en) Method for multi-camera automated high-precision calibration
CN104123726B (en) Heavy forging measuring system scaling method based on vanishing point
CN201007646Y (en) Liquid auxiliary dislocation scanning three-dimensional shape measuring apparatus
CN104180770A (en) Three-dimensional shape detection method for tool wear
CN105825501A (en) Model guided 3D printing forehead and facial tumor treatment guide plate intelligent quality detection method
CN114338981B (en) Automatic range-finding area and volume camera for experiment
CN108596929A (en) The light of fusion plane grid depth calculation cuts data modeling reconstructing method
CN105931177B (en) Image acquisition processing device and method under specific environment
CN102999895A (en) Method for linearly solving intrinsic parameters of camera by aid of two concentric circles
CN114792345A (en) Calibration method based on monocular structured light system
CN110136203B (en) Calibration method and calibration system of TOF equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant