CN117896629A - Control method, device and medium of confocal camera - Google Patents

Control method, device and medium of confocal camera

Info

Publication number
CN117896629A
Authority
CN
China
Prior art keywords
image, images, different positions, height, preferred
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410109393.3A
Other languages
Chinese (zh)
Inventor
蒋泽忠
吴绍秋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huading Intelligent Equipment Dongguan Co ltd
Original Assignee
Huading Intelligent Equipment Dongguan Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huading Intelligent Equipment Dongguan Co ltd filed Critical Huading Intelligent Equipment Dongguan Co ltd
Priority to CN202410109393.3A priority Critical patent/CN117896629A/en
Publication of CN117896629A publication Critical patent/CN117896629A/en
Pending legal-status Critical Current

Landscapes

  • Length Measuring Devices By Optical Means (AREA)

Abstract

The embodiment of the application provides a control method, a device and a medium of a confocal camera. While the color camera captures an image, an optical signal is output based on the white light camera; the optical signal is traversed, and the image presented at the current height is captured at a plurality of different positions; the images presented at the plurality of different positions are scanned and their imaging states are defined, the plurality of different positions respectively corresponding to the same focusing plane; and the preferred image for the current height is selected according to the imaging states of the images presented at the plurality of different positions. Because the image presented at the current height is captured at a plurality of different positions, each position yields a corresponding imaging state, so that the preferred image can be defined from the plurality of imaging states; preferred images at other heights are then acquired, and a three-dimensional model is synthesized based on the preferred image of the current height and the preferred images of the different heights, which ensures both the synthesis precision of the three-dimensional model and the precision of the preferred images.

Description

Control method, device and medium of confocal camera
Technical Field
The application relates to the technical field of confocal cameras, and in particular to a control method, a device and a medium of a confocal camera.
Background
With the development of science and technology, confocal cameras have gradually been applied in the industrial field to image target objects. A confocal camera photographs a target object at different heights, and several images may correspond to the target object at each height. These images are usually screened manually, with each image evaluated on its own to define its sharpness; however, the images are not evaluated at the level of the focusing plane, which affects the screening of images at different heights and in turn the synthesis precision of the subsequent three-dimensional model.
Disclosure of Invention
The embodiment of the application provides a control method, a device and a medium of a confocal camera which, at least to a certain extent, capture the image presented at the current height at a plurality of different positions, so that each position yields a corresponding imaging state; a preferred image is defined according to these imaging states, preferred images at other heights are then acquired, and a three-dimensional model is synthesized based on the preferred image of the current height and the preferred images of the different heights, which ensures both the synthesis precision of the three-dimensional model and the precision of the preferred images.
Other features and advantages of the present application will be apparent from the following detailed description, or may be learned in part by the practice of the application.
According to an aspect of the embodiments of the present application, there is provided a control method of a confocal camera, including:
outputting an optical signal based on the white light camera when the color camera shoots an image;
traversing the optical signals and shooting images presented by the current height at a plurality of different positions;
scanning images presented by a plurality of different positions, and defining imaging states of the images presented by the plurality of different positions, wherein the plurality of different positions respectively correspond to the same focusing plane;
selecting a preferred image with a current height according to imaging states of images presented at a plurality of different positions, and performing height adjustment based on the current height to acquire the preferred images with different heights;
the three-dimensional model is synthesized based on the preferred image of the current height and the preferred images of different heights.
Optionally, when the color camera shoots an image, the white light camera outputs an optical signal, including:
acquiring a shooting signal of a color camera;
triggering primary shooting of a color camera according to shooting signals, and forming a primary image;
traversing the primary image, and positioning a positioning area of the target object and the type of the target object;
regulating and controlling the orientation of the color camera according to the positioning area of the target object;
regulating and controlling movement of the color camera based on the positioning area of the target object and the type of the target object, and shooting an image based on the color camera, at which time an optical signal is output based on the white light camera;
the target object is photographed at different heights while the white light camera outputs the optical signal.
Optionally, the traversing the optical signal and capturing images of the current elevation presentation at a plurality of different locations includes:
traversing the optical signal and locating a target object along the optical signal;
triggering the color camera to shoot the target object according to the positioning signal of the target object;
shooting images presented by the current height at a plurality of different positions when the color camera moves along the height direction;
the images are categorized based on height and a set of images corresponding to the height is formed.
Optionally, the scanning the images presented in the multiple different positions and defining imaging states of the images presented in the multiple different positions, where the multiple different positions respectively correspond to the same focusing plane includes:
acquiring an image set with corresponding height, and marking the corresponding height;
defining a first image based on the set of images;
detecting a first image based on a plurality of different positions at the same height, wherein the plurality of different positions respectively correspond to the same focusing plane and are correspondingly provided with a photoelectric detector;
the imaging state of the first image is output according to the photodetectors at different positions.
Optionally, the scanning the images presented in the multiple different positions and defining imaging states of the images presented in the multiple different positions, where the multiple different positions respectively correspond to the same focusing plane, further includes:
defining a focusing plane and recording parameters of the focusing plane;
locating a plurality of probe points of a focal plane;
correlating based on the plurality of probe points;
and constructing a dynamic relation according to the parameters of the plurality of detection points and the focusing plane, and fixing the relative positions and the number of the plurality of detection points, and simultaneously positioning the traversal range of the image based on the relative positions among the plurality of detection points.
Optionally, the selecting a preferred image of the current height according to the imaging states of the images presented at the plurality of different positions, and performing height adjustment based on the current height to obtain the preferred image of the different heights includes:
acquiring imaging states of images presented by a plurality of different positions;
outputting a state level of the image according to the plurality of imaging states and the state learning model;
if the state level of the image is smaller than the preset state level, selecting a replacement image at the same height;
and further evaluating the state of the replacement image until the state level reaches a preset state level, taking the replacement image as a preferred image of the current height, and simultaneously, carrying out height adjustment based on the current height so as to obtain the preferred images of different heights.
Optionally, the synthesizing the three-dimensional model based on the preferred image of the current height and the preferred image of the different heights includes:
acquiring a preferred image of a current height and preferred images of different heights;
sequentially placing a plurality of preferred images along the height order;
outputting a plurality of stereoscopic features based on the plurality of preferred images and the image feature model;
a three-dimensional model is synthesized based on the plurality of stereo features.
Optionally, the synthesizing the three-dimensional model based on the preferred image of the current height and the preferred image of the different heights further includes:
traversing the three-dimensional model, and screening convex and concave parts of the three-dimensional model;
marking convex-concave parts of the three-dimensional model, and recording three-dimensional coordinates of the convex-concave parts;
defining a measuring height according to the three-dimensional coordinates of the convex and concave parts, and backtracking a corresponding image set based on the measuring height;
defining replaceable features based on at least three images in the image set, and replacing the replaceable features with the concave-convex parts of the three-dimensional model so as to smoothly process the three-dimensional model.
According to an aspect of the embodiments of the present application, there is provided a control device of a confocal camera, including:
an output module for outputting an optical signal based on the white light camera when the color camera captures an image;
the shooting module is used for traversing the optical signals and shooting images presented by the current height at a plurality of different positions;
the definition module is used for scanning the images presented at a plurality of different positions and defining imaging states of the images presented at the plurality of different positions, wherein the plurality of different positions respectively correspond to the same focusing plane;
the evaluation module is used for selecting a preferred image with the current height according to the imaging states of the images presented at a plurality of different positions, and carrying out height adjustment based on the current height so as to acquire the preferred images with different heights;
and the synthesis module is used for synthesizing the three-dimensional model based on the preferred image of the current height and the preferred images of different heights.
According to an aspect of the embodiments of the present application, there is provided a computer-readable medium having stored thereon a computer program which, when executed by a processor, implements a control method of a confocal camera as described in the above embodiments.
According to an aspect of an embodiment of the present application, there is provided an electronic device including: one or more processors; and a storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method of controlling a confocal camera as described in the above embodiments.
According to an aspect of embodiments of the present application, there is provided a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the control method of the confocal camera provided in the above-described embodiment.
In some embodiments of the present application, while the color camera captures an image, an optical signal is output based on the white light camera; the optical signal is traversed, and the image presented at the current height is captured at a plurality of different positions; the images presented at the plurality of different positions are scanned and their imaging states are defined, the plurality of different positions respectively corresponding to the same focusing plane; and the preferred image for the current height is selected according to the imaging states of the images presented at the plurality of different positions. Because the image presented at the current height is captured at a plurality of different positions, each position yields a corresponding imaging state, so that the preferred image can be defined from the plurality of imaging states; preferred images at other heights are then acquired, and the three-dimensional model is synthesized based on the preferred image of the current height and the preferred images of the different heights, which ensures both the synthesis precision of the three-dimensional model and the precision of the preferred images.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application. It is apparent that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained from these drawings without inventive effort for a person of ordinary skill in the art. In the drawings:
FIG. 1 illustrates a flow diagram of a method of controlling a confocal camera according to one embodiment of the present application;
FIG. 2 shows a schematic flow chart of S110 in FIG. 1;
FIG. 3 shows a schematic flow chart of S120 in FIG. 1;
FIG. 4 shows a schematic flow chart of S130 in FIG. 1;
fig. 5 shows a schematic flow chart of S140 in fig. 1;
FIG. 6 shows a schematic flow chart of S150 in FIG. 1;
FIG. 7 illustrates a practical schematic of a control method of a confocal camera according to one embodiment of the application;
FIG. 8 illustrates a block diagram of a control device of a confocal camera according to an embodiment of the application;
fig. 9 shows a schematic diagram of a computer system suitable for use in implementing the electronic device of the embodiments of the present application.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments may be embodied in many forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the present application. One skilled in the relevant art will recognize, however, that the aspects of the application can be practiced without one or more of the specific details, or with other methods, components, devices, steps, etc. In other instances, well-known methods, devices, implementations, or operations are not shown or described in detail to avoid obscuring aspects of the application.
The block diagrams depicted in the figures are merely functional entities and do not necessarily correspond to physically separate entities. That is, the functional entities may be implemented in software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
The flow diagrams depicted in the figures are exemplary only, and do not necessarily include all of the elements and operations/steps, nor must they be performed in the order described. For example, some operations/steps may be decomposed, and some operations/steps may be synthesized or partially synthesized, so that the order of actual execution may be changed according to actual situations.
Fig. 1 shows a flow diagram of a method of controlling a confocal camera according to an embodiment of the present application. The method may be applied to a confocal camera to image a target object.
Referring to fig. 1 to 9, the control method of the confocal camera includes at least steps S110 to S150, which are detailed below (the following description takes the application of the method to a confocal camera as an example):
step S110, outputting an optical signal based on the white light camera when the color camera shoots an image;
step S120, traversing the optical signals, and shooting images presented by the current height at a plurality of different positions;
step S130, scanning images presented at a plurality of different positions, and defining imaging states of the images presented at the plurality of different positions, wherein the plurality of different positions respectively correspond to the same focusing plane;
step S140, selecting a preferred image with a current height according to imaging states of the images presented at a plurality of different positions, and performing height adjustment based on the current height to obtain the preferred image with different heights;
step S150, synthesizing a three-dimensional model based on the preferred image of the current height and the preferred images of different heights.
In some embodiments of the present application, while the color camera captures an image, an optical signal is output based on the white light camera; the optical signal is traversed, and the image presented at the current height is captured at a plurality of different positions; the images presented at the plurality of different positions are scanned and their imaging states are defined, the plurality of different positions respectively corresponding to the same focusing plane; and the preferred image for the current height is selected according to the imaging states of the images presented at the plurality of different positions. Because the image presented at the current height is captured at a plurality of different positions, each position yields a corresponding imaging state, so that the preferred image can be defined from the plurality of imaging states; preferred images at other heights are then acquired, and the three-dimensional model is synthesized based on the preferred image of the current height and the preferred images of the different heights, which ensures both the synthesis precision of the three-dimensional model and the precision of the preferred images.
In step S110, when the color camera captures an image, an optical signal is output based on the white light camera.
In the embodiment of the application, the target object is photographed by the color camera; at this time, an optical signal is output based on the white light camera, and the target object is positioned along the optical signal, so that photographing of the target object is controlled.
The specific steps are as follows:
step S111, acquiring shooting signals of a color camera;
step S112, triggering primary shooting of a color camera according to shooting signals, and forming a primary image;
step S113, traversing the primary image, and positioning a positioning area of the target object and the type of the target object;
step S114, regulating and controlling the orientation of the color camera according to the positioning area of the target object;
step S115, regulating and controlling the movement of the color camera based on the positioning area of the target object and the type of the target object, and shooting an image based on the color camera, wherein at this time, an optical signal is output based on the white light camera;
step S116, moving the color camera, shooting the target object at different heights, and outputting an optical signal based on the white light camera.
In the embodiment of the application, a shooting signal of the color camera is acquired; the shooting signal may serve as a communication signal or an instruction that triggers the color camera to photograph the target object. The primary shooting of the color camera is then triggered according to the shooting signal, and a primary image is formed. The purpose of the primary shooting of the target object is to position the target object through the primary image: the primary image is traversed, and the positioning area of the target object and the type of the target object are located, so that both can be used in the following steps.
Then, the orientation of the color camera is regulated according to the positioning area of the target object, so that the positioning area is handled adaptively and the color camera is guaranteed to photograph the target object from a suitable orientation. At the same time, the movement of the color camera is regulated based on the positioning area of the target object and the type of the target object, an image is captured by the color camera, and at this time an optical signal is output based on the white light camera.
Optionally, the positioning area of the target object and the type of the target object are introduced as factors that influence the movement of the color camera, so that the movement rule of the color camera is adjusted by controlling these two factors; while the color camera moves, the target object is photographed at different heights, and the white light camera outputs the optical signal. For example, the movement rule of the color camera may be to rise by a preset distance each time, where the preset distance is influenced by the positioning area of the target object and the type of the target object.
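As a purely illustrative sketch (the step factors, object types, function names and numeric values below are assumptions, not values taken from the application), the preset rise distance could be derived from the positioning area and the object type along these lines:

```python
# Hypothetical sketch: derive the per-move rise distance of the color camera
# from the target's positioning area and its type. All names and factors are
# illustrative assumptions, not values from the application.

OBJECT_STEP_FACTORS = {"pcb": 0.5, "connector": 1.0, "housing": 2.0}  # assumed types

def preset_step_mm(region_width_mm: float, region_height_mm: float,
                   object_type: str, base_step_mm: float = 0.05) -> float:
    """Return the height increment for the next shot.

    Larger positioning areas and bulkier object types get a coarser step so
    the scan finishes in a reasonable number of images; small, flat objects
    get a finer step for better height resolution.
    """
    area = region_width_mm * region_height_mm
    area_factor = max(1.0, area / 100.0)          # grow the step with region area
    type_factor = OBJECT_STEP_FACTORS.get(object_type, 1.0)
    return base_step_mm * area_factor * type_factor

# Example: a 20 mm x 10 mm connector region -> 0.05 * 2.0 * 1.0 = 0.1 mm per move
print(preset_step_mm(20.0, 10.0, "connector"))
```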
In step S120, the optical signal is traversed and the image presented at the current altitude is taken at a plurality of different positions.
In an embodiment of the present application, the optical signal is acquired and traversed so that it can be analyzed, and the target object is controlled along the optical signal, thereby positioning the target object.
The specific steps are as follows:
step S121, traversing the optical signal and positioning a target object along the optical signal;
step S122, shooting the target object by the color camera is triggered according to the positioning signal of the target object.
Step S123, when the color camera moves along the height direction, the image presented by the current height is shot at a plurality of different positions.
Step S124, classifying the images based on the heights, and forming image sets of corresponding heights.
In the embodiment of the application, the optical signal is traversed and the target object is positioned along the optical signal, so that a positioning signal of the target object is triggered from the optical signal; photographing of the target object by the color camera is then triggered according to this positioning signal, which ensures the photographing effect of the color camera on the target object.
In the embodiment of the application, while the color camera moves along the height direction, the image presented at the current height is captured at a plurality of different positions, all of which lie in the same focusing plane, so that the image presented at the current height can be evaluated at the different positions and locally different effects can be controlled accordingly. The images are then classified by height, and an image set for each height is formed to facilitate subsequent processing of the image sets.
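A minimal sketch of this classification step, assuming each capture is tagged with its height and lateral position (the record layout is an assumption made for illustration):

```python
# Hypothetical sketch: classify captured frames by height into image sets.
# The (height, position, frame) record layout is an assumption for illustration.
from collections import defaultdict

def build_image_sets(captures):
    """Group captures into {height: [(position, frame), ...]}.

    `captures` is an iterable of (height_mm, position_id, frame) tuples, where
    all positions at one height lie in the same focusing plane.
    """
    image_sets = defaultdict(list)
    for height_mm, position_id, frame in captures:
        image_sets[round(height_mm, 3)].append((position_id, frame))
    return dict(image_sets)

# Example with dummy frames (None stands in for pixel data)
captures = [(0.1, 0, None), (0.1, 1, None), (0.2, 0, None)]
sets_by_height = build_image_sets(captures)
print(sorted(sets_by_height))  # [0.1, 0.2] -> two image sets
```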
In step S130, images presented at a plurality of different positions are scanned, and imaging states of the images presented at the plurality of different positions are defined, wherein the plurality of different positions respectively correspond to the same focus plane.
In the embodiment of the application, the images presented at the plurality of different positions are scanned so that they are fully controlled, and the imaging states of the images presented at the plurality of different positions are then defined so that the images can be evaluated comprehensively based on the imaging states, wherein the plurality of different positions respectively correspond to the same focusing plane.
The specific steps are as follows:
step S131, acquiring an image set with corresponding height, and marking the corresponding height;
step S132, defining a first image based on the image set;
step S133, detecting a first image based on a plurality of different positions at the same height, wherein the plurality of different positions respectively correspond to the same focusing plane and are correspondingly provided with a photoelectric detector;
step S134, outputting the imaging state of the first image according to the photodetectors at different positions.
In an embodiment of the present application, the image set for the corresponding height is acquired and the corresponding height is marked, so that several images at different heights are collected; a first image is then defined based on the image set, and, at the same height, the first image is detected from a plurality of different positions, wherein the plurality of different positions respectively correspond to the same focusing plane and each is correspondingly provided with a photodetector.
The image is detected by the plurality of photodetectors and a corresponding imaging state is output: at the same height, the first image is detected from the plurality of different positions, each of which corresponds to the same focusing plane and is provided with a photodetector, and the imaging state of the first image is output according to the photodetectors at the different positions. In this way, the imaging state of the first image is assessed in its entirety, using the outputs of the several photodetectors together.
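One possible reading of this step, sketched under the assumption that each photodetector reports a normalized intensity for its position in the focusing plane; the threshold and the state labels are illustrative, not taken from the application:

```python
# Hypothetical sketch: aggregate photodetector readings from several positions
# in the same focusing plane into an imaging state for the first image.
# Threshold value and "state" labels are assumptions for illustration.

def imaging_state(detector_readings, in_focus_threshold=0.8):
    """detector_readings: {position_id: normalized intensity in [0, 1]}.

    A position counts as in focus when its detector reading exceeds the
    threshold; the imaging state summarizes how many positions are in focus.
    """
    in_focus = {p for p, v in detector_readings.items() if v >= in_focus_threshold}
    if len(in_focus) == len(detector_readings):
        return "in_focus_all"
    if in_focus:
        return "in_focus_partial"
    return "out_of_focus"

print(imaging_state({0: 0.92, 1: 0.85, 2: 0.40}))  # -> "in_focus_partial"
```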
In another embodiment of the present application, scanning the images presented at the plurality of different positions and defining their imaging states, wherein the plurality of different positions respectively correspond to the same focusing plane, further includes: defining a focusing plane and recording parameters of the focusing plane; locating a plurality of probe points of the focusing plane; correlating the plurality of probe points; and constructing a dynamic relation from the plurality of probe points and the parameters of the focusing plane, fixing the relative positions and the number of the probe points, and at the same time locating the traversal range of the image based on the relative positions among the probe points. Traversing the image only locally within this range keeps the positions of the probe points fully controllable and improves the efficiency of image screening.
In step S140, a preferred image of a current height is selected according to the imaging states of the images presented at the plurality of different positions, and a height adjustment is performed based on the current height to obtain the preferred image of different heights.
In the embodiment of the application, the preferred image for the current height is selected according to the imaging states of the images presented at the plurality of different positions, and a corresponding preferred image is matched for each other height, so that the images at the different heights are controlled and subsequent synthesis of the images is facilitated.
The specific steps are as follows:
step S141, acquiring imaging states of images presented at a plurality of different positions.
Step S142, outputting a state level of the image according to the plurality of imaging states and the state learning model.
Step S143, if the state level of the image is smaller than the preset state level, selecting the replacement image at the same height.
Step S144, further evaluating the state of the replacement image until the state level reaches a preset state level, taking the replacement image as a preferred image of the current height, and simultaneously, carrying out height adjustment based on the current height to obtain preferred images of different heights.
In the embodiment of the application, the imaging states of the images presented at the plurality of different positions are acquired so that they can be considered together; the state level of the image is output according to the plurality of imaging states and the state learning model, and the image is controlled through its state level. If the state level of the image is lower than the preset state level, a replacement image is selected at the same height; the state of the replacement image is evaluated in turn until its state level reaches the preset state level, and the replacement image is taken as the preferred image for the current height. Meanwhile, the height is adjusted based on the current height so as to obtain the preferred images at other heights.
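A sketch of this selection loop, with a simple scoring function standing in for the state learning model (the level scale, the scoring rule, and the candidate record layout are assumptions):

```python
# Hypothetical sketch of selecting the preferred image at one height.
# `state_level` is a stand-in for the application's state learning model;
# the level scale (0-10) and the candidate record layout are assumptions.

def state_level(imaging_states):
    """Map per-position imaging states to a coarse level (higher is better)."""
    score = sum({"in_focus_all": 10, "in_focus_partial": 5,
                 "out_of_focus": 0}.get(s, 0) for s in imaging_states)
    return score / max(len(imaging_states), 1)

def preferred_image(candidates, preset_level=8.0):
    """candidates: list of (frame, [imaging states at the different positions]).

    Walk through candidates at the same height until one reaches the preset
    state level; fall back to the best-scoring frame if none does.
    """
    best_frame, best_level = None, -1.0
    for frame, states in candidates:
        level = state_level(states)
        if level >= preset_level:
            return frame                      # first frame meeting the preset level
        if level > best_level:
            best_frame, best_level = frame, level
    return best_frame                         # fallback: highest level seen

candidates = [("img_a", ["in_focus_partial"] * 3),
              ("img_b", ["in_focus_all", "in_focus_all", "in_focus_partial"])]
print(preferred_image(candidates))  # -> "img_b"
```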
In step S150, a three-dimensional model is synthesized based on the preferred image of the current height and the preferred images of different heights.
At this point, the image presented at the current height has been captured at a plurality of different positions, and each position has yielded a corresponding imaging state, so the preferred image is defined according to these imaging states; the preferred images at other heights are obtained in the same way, and the three-dimensional model is synthesized based on the preferred image of the current height and the preferred images of the different heights, which ensures both the synthesis precision of the three-dimensional model and the precision of the preferred images.
The specific steps are as follows:
step S151, acquiring a preferred image of the current height and a preferred image of different heights.
Step S152, sequentially placing the plurality of preferred images along the height order.
Step S153, outputting a plurality of stereo features based on the plurality of preferred images and the image feature model.
And step S154, synthesizing a three-dimensional model based on the plurality of three-dimensional features.
In the embodiment of the application, the preferred image of the current height and the preferred images of the different heights are acquired and ordered along the height direction; the preferred images are placed in sequence along the height order, a plurality of stereoscopic features are output based on the preferred images and the image feature model so as to facilitate their synthesis, and the three-dimensional model is synthesized based on these stereoscopic features.
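A minimal sketch of lifting per-image features into three dimensions, with a gradient-based feature detector standing in for the application's image feature model (an assumption); each feature keeps the height of the preferred image it came from as its z coordinate:

```python
# Hypothetical sketch: turn each height's preferred image into a set of
# "stereoscopic features" (here: strong local gradient responses, an
# assumption standing in for the image feature model) and collect them
# as a simple (x, y, z) point cloud acting as a minimal 3-D model.
import numpy as np
from scipy.ndimage import gaussian_gradient_magnitude, maximum_filter

def lift_features_to_3d(preferred_by_height, n_features=50):
    """preferred_by_height: {height_mm: 2-D grayscale array}."""
    points = []
    for h, img in sorted(preferred_by_height.items()):
        response = gaussian_gradient_magnitude(img.astype(float), sigma=1.5)
        # keep only local maxima of the response, then the n strongest of them
        peaks = (response == maximum_filter(response, size=5)) & (response > 0)
        ys, xs = np.nonzero(peaks)
        order = np.argsort(response[ys, xs])[::-1][:n_features]
        points.extend((float(xs[i]), float(ys[i]), h) for i in order)
    return np.array(points)   # (N, 3) point cloud

rng = np.random.default_rng(0)
cloud = lift_features_to_3d({0.0: rng.random((64, 64)), 0.1: rng.random((64, 64))})
print(cloud.shape)  # roughly (100, 3)
```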
Synthesizing the three-dimensional model based on the preferred image of the current height and the preferred images of the different heights further includes: traversing the three-dimensional model and screening its convex and concave parts; marking the convex and concave parts of the three-dimensional model and recording their three-dimensional coordinates; defining a measuring height according to the three-dimensional coordinates of the convex and concave parts and backtracking to the corresponding image set based on the measuring height; and defining replaceable features based on at least three images in the image set and substituting them for the convex and concave parts, so that the three-dimensional model is smoothed.
In the embodiment of the application, vertical photographing takes pictures through vertical movement and measures, for each plane, the part of that plane with the highest brightness (sharpness). The method is as follows: the lens moves to its lowest position; the lens then moves upwards, and a picture is taken every unit distance. Each image is median-filtered to suppress noise and averaged, and a corresponding sharpness (definition) value is obtained through a secondary blur. From the maximum of the sharpness values across the images, a corresponding depth value is obtained for each pixel, and a depth-color image sequence is composed from the depth values and the image pixels.
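Read literally, this recipe is close to a standard depth-from-focus pass; a minimal sketch under that reading follows, where the filter sizes and the sigma of the secondary blur are assumed parameters:

```python
# Hypothetical depth-from-focus sketch following the recipe above: median-filter
# each frame against noise, take a local average, use a secondary blur to score
# sharpness, and keep for every pixel the height of its sharpest frame.
# Filter sizes and the sigma of the secondary blur are assumptions.
import numpy as np
from scipy.ndimage import median_filter, uniform_filter, gaussian_filter

def depth_from_focus(frames, unit_distance_mm=0.05):
    """frames: list of 2-D grayscale arrays taken bottom-up, one per unit move.

    Returns (depth_mm, color), where depth_mm holds the height of maximal
    sharpness per pixel and color the pixel taken from that same frame,
    i.e. the depth-color image sequence collapsed to one image pair.
    """
    sharpness = []
    for f in frames:
        f = median_filter(f.astype(float), size=3)       # median filtering against noise
        f = uniform_filter(f, size=3)                     # local averaging
        blurred = gaussian_filter(f, sigma=2.0)           # secondary blur
        sharpness.append(np.abs(f - blurred))             # sharpness (definition) value
    sharpness = np.stack(sharpness)
    best = np.argmax(sharpness, axis=0)                   # frame index of max sharpness
    depth_mm = best * unit_distance_mm                    # depth value per pixel
    color = np.take_along_axis(np.stack(frames).astype(float),
                               best[None], axis=0)[0]     # pixel from the sharpest frame
    return depth_mm, color

rng = np.random.default_rng(0)
d, c = depth_from_focus([rng.random((32, 32)) for _ in range(5)])
print(d.min(), d.max())  # per-pixel depths spanning the scanned range
```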
For high-precision confocal measurement, images of the various height planes are taken by vertical movement. The method is as follows: the lens moves to its lowest position; the lens then moves upwards, and the white light source emits light every unit distance. When the returning light triggers a plurality of photodetectors simultaneously, the depth of the object at that point is recorded. The recorded data are then combined: the color image sequence from the vertical photographing is extracted according to the depth data measured confocally, and the three-dimensional model is synthesized.
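A sketch of the confocal pass and the combination step, under the assumption that each unit move yields a per-pixel count of triggered photodetectors and that a pixel's depth is recorded at the first move where enough detectors trigger simultaneously (threshold and data layout are assumptions):

```python
# Hypothetical sketch of the high-precision confocal pass: at every unit move
# the white light source fires, and where the returning light triggers several
# photodetectors at once the current height is recorded as that pixel's depth.
# The trigger threshold and the per-move detector-count maps are assumptions.
import numpy as np

def confocal_depth(detector_counts_per_move, unit_distance_mm=0.01, min_detectors=3):
    """detector_counts_per_move: list of 2-D int arrays, one per upward move,
    giving how many photodetectors triggered at each pixel for that move."""
    depth = np.full(detector_counts_per_move[0].shape, np.nan)
    for move_idx, counts in enumerate(detector_counts_per_move):
        hit = (counts >= min_detectors) & np.isnan(depth)   # first simultaneous trigger wins
        depth[hit] = move_idx * unit_distance_mm
    return depth

def fuse_with_color_stack(depth, color_stack, unit_distance_mm=0.01):
    """Pick, for each pixel, the vertical-photographing frame closest to the
    confocally measured depth, giving the color layer of the 3-D model."""
    idx = np.clip(np.nan_to_num(depth / unit_distance_mm), 0,
                  len(color_stack) - 1).astype(int)
    stack = np.stack(color_stack).astype(float)
    return np.take_along_axis(stack, idx[None], axis=0)[0]

rng = np.random.default_rng(0)
counts = [rng.integers(0, 5, (16, 16)) for _ in range(4)]
depth = confocal_depth(counts)
color = fuse_with_color_stack(depth, [rng.random((16, 16)) for _ in range(4)])
print(depth.shape, color.shape)
```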
In some embodiments of the present application, while the color camera captures an image, an optical signal is output based on the white light camera; the optical signal is traversed, and the image presented at the current height is captured at a plurality of different positions; the images presented at the plurality of different positions are scanned and their imaging states are defined, the plurality of different positions respectively corresponding to the same focusing plane; and the preferred image for the current height is selected according to the imaging states of the images presented at the plurality of different positions. Because the image presented at the current height is captured at a plurality of different positions, each position yields a corresponding imaging state, so that the preferred image can be defined from the plurality of imaging states; preferred images at other heights are then acquired, and the three-dimensional model is synthesized based on the preferred image of the current height and the preferred images of the different heights, which ensures both the synthesis precision of the three-dimensional model and the precision of the preferred images.
The following describes an embodiment of an apparatus of the present application, which may be used to perform the control method of the confocal camera in the above-described embodiments of the present application. For details not disclosed in the embodiments of the apparatus of the present application, please refer to the embodiments of the control method of the confocal camera described in the present application.
Fig. 8 shows a block diagram of a control device of a confocal camera according to an embodiment of the application.
Referring to fig. 8, a control device of a confocal camera according to an embodiment of the present application includes:
an output module 210 for outputting an optical signal based on the white light camera when the color camera captures an image;
the shooting module 220 is configured to traverse the optical signal and shoot images presented by the current height at a plurality of different positions;
a defining module 230, configured to scan images presented at a plurality of different positions, and define imaging states of the images presented at the plurality of different positions, where the plurality of different positions respectively correspond to a same focus plane;
the evaluation module 240 is configured to select a preferred image of a current height according to imaging states of images presented at a plurality of different positions, and perform height adjustment based on the current height to obtain the preferred image of different heights;
a synthesis module 250 for synthesizing the three-dimensional model based on the preferred image of the current height and the preferred images of different heights.
In one embodiment of the present application, there is also provided an electronic device including:
one or more processors;
and a storage means for storing one or more programs that, when executed by the one or more processors, cause the one or more processors to implement the method of controlling a confocal camera as described in the previous embodiments.
In one example, FIG. 9 illustrates a schematic diagram of a computer system suitable for use in implementing the electronic device of the embodiments of the present application.
It should be noted that, the computer system of the electronic device shown in fig. 9 is only an example, and should not impose any limitation on the functions and the application scope of the embodiments of the present application.
As shown in fig. 9, the computer system includes a central processing unit (Central Processing Unit, CPU) 301 (i.e., the processor described above) that can perform various appropriate actions and processes, such as performing the methods described in the above embodiments, according to a program stored in a read-only memory (Read-Only Memory, ROM) 302 or a program loaded from a storage section 308 into a random access memory (Random Access Memory, RAM) 303. It should be understood that the RAM 303 and the ROM 302 are described here merely as storage devices. In the RAM 303, various programs and data required for the operation of the system are also stored. The CPU 301, the ROM 302, and the RAM 303 are connected to each other through a bus 304. An input/output (Input/Output, I/O) interface 305 is also connected to the bus 304.
The following components are connected to the I/O interface 305: an input section 306 including a keyboard, a mouse, and the like; an output section 307 including a cathode ray tube (CRT), a liquid crystal display (Liquid Crystal Display, LCD), a speaker, and the like; a storage section 308 including a hard disk and the like; and a communication section 309 including a network interface card such as a LAN (Local Area Network) card, a modem, and the like. The communication section 309 performs communication processing via a network such as the Internet. A drive 310 is also connected to the I/O interface 305 as needed. A removable medium 311, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 310 as needed, so that a computer program read therefrom is installed into the storage section 308 as needed.
In particular, according to embodiments of the present application, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present application include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising a computer program for performing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication portion 309, and/or installed from the removable medium 311. When executed by a Central Processing Unit (CPU) 301, performs the various functions defined in the system of the present application.
It should be noted that, the computer readable medium shown in the embodiments of the present application may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-Only Memory (ROM), an erasable programmable read-Only Memory (Erasable Programmable Read Only Memory, EPROM), flash Memory, an optical fiber, a portable compact disc read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
In the present application, however, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with a computer-readable computer program embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. A computer program embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wired, etc., or any suitable combination of the foregoing.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. Where each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present application may be implemented by means of software, or may be implemented by means of hardware, and the described units may also be provided in a processor. Wherein the names of the units do not constitute a limitation of the units themselves in some cases.
As another aspect, the present application also provides a computer-readable medium that may be contained in the electronic device described in the above embodiment; or may exist alone without being incorporated into the electronic device. The computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to implement the methods described in the above embodiments.
It should be noted that although in the above detailed description several modules or units of a device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functions of two or more modules or units described above may be embodied in one module or unit, in accordance with embodiments of the present application. Conversely, the features and functions of one module or unit described above may be further divided into a plurality of modules or units to be embodied.
From the above description of the embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or in software in combination with the necessary hardware. Thus, the technical solution according to the embodiments of the present application may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a USB flash drive, a removable hard disk, etc.) or on a network, and includes several instructions to cause a computing device (which may be a personal computer, a server, a touch terminal, a network device, etc.) to perform the method according to the embodiments of the present application.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the embodiments disclosed herein. This application is intended to cover any variations, uses, or adaptations of the application following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the application pertains.
It is to be understood that the present application is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (10)

1. A control method of a confocal camera, characterized in that the method is applied to a confocal camera;
the control method of the confocal camera comprises the following steps:
outputting an optical signal based on the white light camera when the color camera shoots an image;
traversing the optical signals and shooting images presented by the current height at a plurality of different positions;
scanning images presented by a plurality of different positions, and defining imaging states of the images presented by the plurality of different positions, wherein the plurality of different positions respectively correspond to the same focusing plane;
selecting a preferred image with a current height according to imaging states of images presented at a plurality of different positions, and performing height adjustment based on the current height to acquire the preferred images with different heights;
the three-dimensional model is synthesized based on the preferred image of the current height and the preferred images of different heights.
2. The method of claim 1, wherein the outputting the light signal based on the white light camera while the color camera is capturing the image comprises:
acquiring a shooting signal of a color camera;
triggering primary shooting of a color camera according to shooting signals, and forming a primary image;
traversing the primary image, and positioning a positioning area of the target object and the type of the target object;
regulating and controlling the orientation of the color camera according to the positioning area of the target object;
regulating and controlling the movement of the color camera based on the positioning area of the target object and the type of the target object, and shooting an image based on the color camera, wherein at this time, an optical signal is output based on the white light camera;
the target object is photographed at different heights while the white light camera outputs an optical signal.
3. The method of claim 1, wherein traversing the light signal and capturing images of the current elevation presentation at a plurality of different locations comprises:
traversing the optical signal and locating a target object along the optical signal;
triggering the color camera to shoot the target object according to the positioning signal of the target object;
shooting images presented by the current height at a plurality of different positions when the color camera moves along the height direction;
the images are categorized based on height and a set of images corresponding to the height is formed.
4. A method according to claim 3, wherein scanning the images presented at a plurality of different locations and defining imaging states of the images presented at the plurality of different locations, wherein the plurality of different locations respectively correspond to a same focal plane, comprises:
acquiring an image set with corresponding height, and marking the corresponding height;
defining a first image based on the set of images;
detecting a first image based on a plurality of different positions at the same height, wherein the plurality of different positions respectively correspond to the same focusing plane and are correspondingly provided with a photoelectric detector;
the imaging state of the first image is output according to the photodetectors at different positions.
5. The method of claim 4, wherein scanning the images presented at the plurality of different locations and defining imaging states of the images presented at the plurality of different locations, wherein the plurality of different locations respectively correspond to the same focal plane, further comprises:
defining a focusing plane and recording parameters of the focusing plane;
locating a plurality of probe points of a focal plane;
correlating based on the plurality of probe points;
and constructing a dynamic relation according to the parameters of the plurality of detection points and the focusing plane, and fixing the relative positions and the number of the plurality of detection points, and simultaneously positioning the traversal range of the image based on the relative positions among the plurality of detection points.
6. The method of claim 1, wherein selecting the preferred image of the current height according to the imaging status of the images presented at the plurality of different locations and performing the height adjustment based on the current height to obtain the preferred image of the different heights comprises:
acquiring imaging states of images presented by a plurality of different positions;
outputting a state level of the image according to the plurality of imaging states and the state learning model;
if the state level of the image is smaller than the preset state level, selecting a replacement image at the same height;
and further evaluating the state of the replacement image until the state level reaches a preset state level, taking the replacement image as a preferred image of the current height, and simultaneously, carrying out height adjustment based on the current height so as to obtain the preferred images of different heights.
7. The method of claim 6, wherein the synthesizing the three-dimensional model based on the preferred image of the current height and the preferred image of the different heights comprises:
acquiring a preferred image of a current height and preferred images of different heights;
sequentially placing a plurality of preferred images along the height order;
outputting a plurality of stereoscopic features based on the plurality of preferred images and the image feature model;
a three-dimensional model is synthesized based on the plurality of stereo features.
8. The method of claim 1, wherein the synthesizing the three-dimensional model based on the preferred image of the current height and the preferred image of the different heights further comprises:
traversing the three-dimensional model, and screening convex and concave parts of the three-dimensional model;
marking convex-concave parts of the three-dimensional model, and recording three-dimensional coordinates of the convex-concave parts;
defining a measuring height according to the three-dimensional coordinates of the convex and concave parts, and backtracking a corresponding image set based on the measuring height;
defining replaceable features based on at least three images in the image set, and replacing the replaceable features with the concave-convex parts of the three-dimensional model so as to smoothly process the three-dimensional model.
9. A control device for a confocal camera, comprising:
an output module for outputting an optical signal based on the white light camera when the color camera captures an image;
the shooting module is used for traversing the optical signals and shooting images presented by the current height at a plurality of different positions;
the definition module is used for scanning the images presented at a plurality of different positions and defining imaging states of the images presented at the plurality of different positions, wherein the plurality of different positions respectively correspond to the same focusing plane;
the evaluation module is used for selecting a preferred image with the current height according to the imaging states of the images presented at a plurality of different positions, and carrying out height adjustment based on the current height so as to acquire the preferred images with different heights;
and the synthesis module is used for synthesizing the three-dimensional model based on the preferred image of the current height and the preferred images of different heights.
10. A computer-readable medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the control method of a confocal camera according to any one of claims 1 to 8.
CN202410109393.3A 2024-01-25 2024-01-25 Control method, device and medium of confocal camera Pending CN117896629A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410109393.3A CN117896629A (en) 2024-01-25 2024-01-25 Control method, device and medium of confocal camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410109393.3A CN117896629A (en) 2024-01-25 2024-01-25 Control method, device and medium of confocal camera

Publications (1)

Publication Number Publication Date
CN117896629A true CN117896629A (en) 2024-04-16

Family

ID=90648849

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410109393.3A Pending CN117896629A (en) 2024-01-25 2024-01-25 Control method, device and medium of confocal camera

Country Status (1)

Country Link
CN (1) CN117896629A (en)

Similar Documents

Publication Publication Date Title
US10698308B2 (en) Ranging method, automatic focusing method and device
US20210250494A1 (en) Real time assessment of picture quality
JP6855587B2 (en) Devices and methods for acquiring distance information from a viewpoint
US8405742B2 (en) Processing images having different focus
US10430962B2 (en) Three-dimensional shape measuring apparatus, three-dimensional shape measuring method, and storage medium that calculate a three-dimensional shape of an object by capturing images of the object from a plurality of directions
WO2014171418A1 (en) Three-dimensional shape measurement device, three-dimensional shape measurement method, and three-dimensional shape measurement program
US10362235B2 (en) Processing apparatus, processing system, image pickup apparatus, processing method, and storage medium
EP3499178B1 (en) Image processing system, image processing program, and image processing method
US20150302573A1 (en) Method for designing a passive single-channel imager capable of estimating depth of field
CN103297799A (en) Testing an optical characteristic of a camera component
JP2007322259A (en) Edge detecting method, apparatus and program
CN109883354B (en) Adjusting system and method for projection grating modeling
JP2011095131A (en) Image processing method
CN117896629A (en) Control method, device and medium of confocal camera
CN109900702A (en) Processing method, device, equipment, server and the system of vehicle damage detection
JP2011133360A (en) Distance measuring device, distance measurement method, and program
CN109813533B (en) Method and device for testing DOE diffraction efficiency and uniformity in batch
JP6939501B2 (en) Image processing system, image processing program, and image processing method
US11698342B2 (en) Method and system for analysing fluorospot assays
JP6969739B2 (en) Location information acquisition system, location information acquisition method and program
JP2010121955A (en) Height information acquisition device, height information acquisition method, and program
JP2015210396A (en) Aligment device, microscope system, alignment method and alignment program
JP2014142213A (en) Photographing parameter determination device and control method of the same
US11790600B2 (en) Image processing device, imaging apparatus, image processing method, and recording medium
US11997396B2 (en) Processing apparatus, processing system, image pickup apparatus, processing method, and memory medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination