CN109899711B - Lighting apparatus and robot camera - Google Patents
- Publication number
- CN109899711B (application CN201711311241.8A)
- Authority
- CN
- China
- Prior art keywords
- illumination
- wing
- light
- lighting device
- equation
- Prior art date
- Legal status: Active
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B1/00—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
- A61B1/06—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor with illuminating arrangements
-
- F—MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
- F21—LIGHTING
- F21S—NON-PORTABLE LIGHTING DEVICES; SYSTEMS THEREOF; VEHICLE LIGHTING DEVICES SPECIALLY ADAPTED FOR VEHICLE EXTERIORS
- F21S6/00—Lighting devices intended to be free-standing
-
- F—MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
- F21—LIGHTING
- F21S—NON-PORTABLE LIGHTING DEVICES; SYSTEMS THEREOF; VEHICLE LIGHTING DEVICES SPECIALLY ADAPTED FOR VEHICLE EXTERIORS
- F21S8/00—Lighting devices intended for fixed installation
-
- F—MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
- F21—LIGHTING
- F21V—FUNCTIONAL FEATURES OR DETAILS OF LIGHTING DEVICES OR SYSTEMS THEREOF; STRUCTURAL COMBINATIONS OF LIGHTING DEVICES WITH OTHER ARTICLES, NOT OTHERWISE PROVIDED FOR
- F21V19/00—Fastening of light sources or lamp holders
-
- F—MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
- F21—LIGHTING
- F21V—FUNCTIONAL FEATURES OR DETAILS OF LIGHTING DEVICES OR SYSTEMS THEREOF; STRUCTURAL COMBINATIONS OF LIGHTING DEVICES WITH OTHER ARTICLES, NOT OTHERWISE PROVIDED FOR
- F21V33/00—Structural combinations of lighting devices with other articles, not otherwise provided for
-
- F—MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
- F21—LIGHTING
- F21V—FUNCTIONAL FEATURES OR DETAILS OF LIGHTING DEVICES OR SYSTEMS THEREOF; STRUCTURAL COMBINATIONS OF LIGHTING DEVICES WITH OTHER ARTICLES, NOT OTHERWISE PROVIDED FOR
- F21V5/00—Refractors for light sources
- F21V5/04—Refractors for light sources of lens shape
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
Abstract
The embodiments of the present application provide a lighting apparatus and a robot camera. The lighting apparatus includes a wing member having at least three spatially evenly arranged wings, a wing deployment mechanism, a light-emitting part on each wing, and a lens part covering the outer side of each light-emitting part. The wing deployment mechanism is connected to the wing member and is capable of causing the wing member to deploy; when the lighting apparatus is in an operating state, the wing member is in the deployed state. By applying the scheme provided by the embodiments of the present application, the shadow depth information in the target irradiation area can be increased, thereby increasing the spatial information for objects in the target irradiation area.
Description
Technical Field
The present application relates to the field of lighting technology, and in particular to a lighting apparatus and a robot camera.
Background
The illumination device may provide illumination for a target irradiation area and may be widely applied in various fields. In particular, it may be applied in enclosed spaces that must be accessed through a small slit, for example in laparoscopy, to illuminate the surgical region within the abdominal cavity during minimally invasive surgery. When the lighting device is used in such an environment, its size must not be too large. Under such size constraints, the light-emitting parts of the lighting apparatus can generally only be mounted compactly on the main body to reduce the volume of the device.
Generally, such a lighting device can enter a closed space through a narrow gap and provide illumination for that space. However, with such a device the shadow depth information within the target irradiation region is insufficient, so the human eye cannot obtain sufficient spatial information about objects in the target irradiation region.
Disclosure of Invention
An object of the embodiments of the present application is to provide a lighting apparatus and a robot camera that increase the shadow depth information within a target irradiation area, thereby increasing the spatial information for objects in that area.
In a first aspect, an embodiment of the present application provides a lighting apparatus, including: a wing member having at least three spatially evenly arranged wings, a wing deployment mechanism, a light-emitting part on each wing, and a lens part covering the outer side of each light-emitting part;

the wing deployment mechanism is connected to the wing member and is capable of causing the wing member to deploy; when the lighting apparatus is in an operating state, the wing member is in the deployed state.
Optionally, the lighting device further comprises: a tilt movement mechanism; the tilting motion mechanism is capable of causing the lighting device to tilt.
Optionally, the lighting device further comprises: an anchor member; the anchoring component is used for anchoring the lighting device at a target position.
Optionally, the lens part maps the light emitted from the light emitting part on a target irradiation area in a prescribed mapping relationship;
the specified mapping relation is as follows: a mapping relation that the illumination uniformity of the target illumination area of the illumination device at a preset distance is not less than a preset uniformity threshold value and the illumination intensity is not less than a preset intensity threshold value is realized; the prescribed mapping relationship is determined based on a refractive index of the lens member, a prescribed volume of the lens member, a size of the light-emitting part, a light intensity distribution of the light-emitting part, and a relative position between the light-emitting part and the target irradiation area.
Optionally, the specified mapping relation is obtained based on a surface gradient ∇u_ε, where u_ε is a solution of the following equation:

−ε Δ²u_ε + det(D²u_ε) = E_s(ξ, η) / E_t(∇u_ε),  ζ = (ξ, η) ∈ Ω_s,  subject to the boundary condition BC,

wherein ε is a constant coefficient; E_s is the illuminance distribution function of the light-emitting part; ζ = {(ξ, η) | ξ² + η² ≤ 1}; Ω_s is the light source domain of the light-emitting part; ξ and η are respectively the abscissa and the ordinate of the projection plane in which the light-emitting part lies; I_0 is the light intensity at the central axis of the light-emitting part; BC is a boundary condition; E_t is the preset illuminance distribution function of the target irradiation area, and E_t is determined according to the preset uniformity threshold and the preset intensity threshold.
Optionally, the surface gradient ∇u_ε is obtained by:

taking a first initial value as the illuminance distribution function E_t of the target irradiation area;

substituting said E_t into the above equation to obtain a solution result u_ε;

determining, according to said u_ε, a simulated illuminance distribution function Ẽ_t of the target irradiation area;

judging whether the difference between Ẽ_t and said E_t is smaller than a preset value; if not, calculating a modified illuminance distribution function, taking the modified illuminance distribution function as the illuminance distribution function E_t, and returning to the step of substituting said E_t into the equation.
Optionally, the solution result u_ε of the equation is obtained in the following manner:

taking a second initial value and a third initial value as the values of u_ε and ε respectively;

substituting the values of u_ε and ε into the equation;

performing numerical discretization on the equation after the values are substituted, and determining the solution u_ε of the discretized equation by using a numerical solver;

judging whether the value of ε is smaller than a preset minimum value; if so, taking the determined solution u_ε as the solution result of the equation; if not, updating the values of u_ε and ε and returning to the step of substituting the values of u_ε and ε into the equation.
In a second aspect, an embodiment of the present application provides a robot camera, including: the camera module and the lighting equipment provided by the embodiment of the application;
the camera module is fixed at the middle position of the wing member; when the wing member is in the deployed state, the camera module can capture images, and when the wing member is in the folded state, the camera module is enclosed inside the wing member.
Optionally, the range of the target illumination area of the illumination device at the preset distance is not less than the range of the image acquisition area of the camera module at the preset distance.
The lighting apparatus and robot camera provided by the embodiments of the present application include a wing member having at least three spatially evenly distributed wings, a wing deployment mechanism, and a light-emitting part and a lens part on each wing; when the lighting apparatus is in the operating state, the wing member is in the deployed state. Because the wing member can be deployed and folded, the apparatus can be made small in the folded state, allowing it to enter a closed space through a narrow gap. When the apparatus is in the operating state, the wings can be deployed so that the light-emitting parts are evenly distributed over a larger space; light sources distributed over a larger space increase the shadow depth information of the target irradiation area, thereby increasing the spatial information for objects in that area. Of course, not all of the advantages described above need to be achieved at the same time when practicing any one product or method of the present application.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is obvious that the drawings in the following description are only some embodiments of the application, and that for a person skilled in the art, other drawings can be derived from them without inventive effort.
Fig. 1a and 1b are schematic structural diagrams of a lighting device provided by an embodiment of the present application in an unfolded state and a folded state, respectively;
fig. 2a is a schematic structural diagram of an illumination device provided in the embodiment of the present application;
fig. 2b and 2c are two reference views corresponding to fig. 2 a;
fig. 3 is a diagram of a lighting device according to an embodiment of the present application;
FIG. 4 is a schematic flow chart of a process for determining a surface gradient provided by an embodiment of the present application;
fig. 5a to 5d are reference diagrams for determining a specified mapping relationship according to the embodiment of the present application;
fig. 6 is a schematic structural diagram of a robot camera according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of another robot camera provided in the embodiment of the present application;
fig. 8 to 15 are reference diagrams for evaluation and testing of the optical design of the lens provided in the embodiments of the present application.
Detailed Description
The technical solution in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It is to be understood that the described embodiments are merely a few embodiments of the present application and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In order to increase shadow depth information within a target illumination area to increase spatial information for an object in the target illumination area, embodiments of the present application provide an illumination apparatus and a robot camera. The present application will be described in detail below with reference to specific examples.
Fig. 1a is a schematic structural diagram of a lighting device provided in an embodiment of the present application in the unfolded state, and fig. 1b is a schematic structural diagram of the lighting device in the folded state. In fig. 1a, the lighting device comprises: a wing member 101 having at least three spatially evenly arranged wings, a wing deployment mechanism 102, a light emitting part 103 on each wing, and a lens part 104 covering the outer side of the light emitting part 103. In fig. 1b, the lighting device is in the folded state, with the three folded wings enclosing the light emitting parts and the lens parts inside.
The wing deployment mechanism 102 is connected to the wing member and is capable of causing the wing member to deploy; when the lighting device is in an operating state, the wing member is in the deployed state. The wing deployment mechanism 102 may be a motor or another device capable of providing a driving force.
As can be seen from the above, the wing member of the lighting device in the present embodiment includes at least three wings that are uniformly arranged in space, and the wing spreading mechanism can cause the wing member to spread, and when the lighting device is in the operating state, the wing member is in the spread state. Because the wing parts of the lighting device can be unfolded and folded, the size of the lighting device in a folded state can be smaller, and the lighting device can enter a closed space from a narrow gap. When the lighting equipment is in a working state, the wing parts can be unfolded, so that the light emitting parts are uniformly distributed in a larger space, and the light sources distributed in the larger space can increase the shadow depth information of the target irradiation area, thereby increasing the space information aiming at the object in the target irradiation area.
In another embodiment of the present application, on the basis of fig. 1a and 1b, the above-mentioned lighting device may further include: a tilt movement mechanism 105; the tilting movement mechanism 105 is capable of causing the lighting device to tilt, as shown in fig. 2 a. Fig. 2b and 2c are two reference views corresponding to fig. 2 a. The tilt mechanism 105 may be a motor or other device capable of providing a driving force.
In this embodiment, the pitch motion mechanism 105 and the wing deployment mechanism 102 may be stepper motors; for example, a stepper motor of 4 mm diameter and 14.42 mm length with a 125:1 planetary gearhead (model ZWBMD004004-125) may be selected. The stepper motor can provide 10 mNm of torque in continuous operation. The worm and gear sets for the pitch and deployment mechanisms may have reduction ratios of 12:1 and 20:1, respectively.
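As a rough drivetrain sanity check: the 10 mNm continuous torque and the 12:1 / 20:1 reduction ratios come from the text, while the 18° step angle and the 40% worm-drive efficiency are illustrative assumptions, so the numbers below are order-of-magnitude estimates rather than specifications:

```python
def output_torque_mNm(motor_torque_mNm, ratio, efficiency):
    """Ideal output torque after a worm-and-gear reduction stage."""
    return motor_torque_mNm * ratio * efficiency

def output_step_deg(step_angle_deg, ratio):
    """Output angular resolution for one full motor step."""
    return step_angle_deg / ratio

# 10 mNm continuous torque (from the text); 40% worm efficiency (assumed).
pitch_torque = output_torque_mNm(10.0, 12, 0.40)   # pitch: 12:1 worm set
span_torque = output_torque_mNm(10.0, 20, 0.40)    # span: 20:1 worm set
pitch_resolution = output_step_deg(18.0, 12)       # 18 deg/step assumed
```

Even with a pessimistic worm efficiency, the reduction leaves a comfortable torque margin for deploying the wings against tissue contact forces.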
In another embodiment of the present application, in fig. 2a, the lighting device may further include: an anchor member 106; an anchoring member 106 for anchoring the lighting device at a target location. The anchoring member may be a magnetic device.
In another embodiment of the application, in fig. 2a, the lighting device may further comprise two worm and gear sets 107 and 108: the first worm and gear set 107 connects the tilt movement mechanism 105 with the anchoring member 106, and the second worm and gear set 108 connects the wing deployment mechanism 102 with the wing member 101. When the wing member 101 includes three wings, the second worm and gear set 108 may include one worm and three gears respectively connected to the three wings.
Specifically, the worm of the first worm and gear set 107 may be coupled to the tilt movement mechanism 105, and the gear of the first worm and gear set 107 may be coupled to the anchoring member 106. Driven by the tilt movement mechanism 105, the worm of the first worm and gear set 107 drives the gear to rotate, so that the lighting device forms a certain included angle with the anchoring member 106.

The worm of the second worm and gear set 108 may be connected to the wing deployment mechanism 102, and the gears of the second worm and gear set 108 may be connected to the wing member 101. Driven by the wing deployment mechanism 102, the worm of the second worm and gear set 108 drives the gears to rotate, so that the wing member 101 is deployed or folded.
Fig. 3 shows the lighting device in the folded state and in the unfolded state. The figure includes the anchoring member 106 and the lighting device; when the lighting device is folded, the three wings of the wing member 101 are visible. When the lighting device is unfolded, the light emitting parts 103 and the lens parts 104 distributed on the three wings can be seen.
In order to improve the light efficiency and light uniformity of the target irradiation area, in another embodiment of the present application, the lens part may map the light emitted from the light emitting part 103 on the target irradiation area in a prescribed mapping relationship. Here, the mapping may also be understood as projection or illumination, i.e., the lens part 104 may cause the light emitted by the light emitting part 103 to be projected or illuminated on the target illumination area in accordance with a specified mapping relationship.
The specified mapping relation is a mapping relation under which the illumination uniformity of the target irradiation area of the illumination device at a preset distance is not less than a preset uniformity threshold and the illumination intensity is not less than a preset intensity threshold; the specified mapping relation is determined based on the refractive index of the lens part, the prescribed volume of the lens part, the size of the light emitting part, the light intensity distribution of the light emitting part, and the relative position between the light emitting part and the target irradiation area.
After passing through the lens, the light emitted by the light emitting component changes its optical path and irradiates the target irradiation area according to the specified mapping relation, so that the target irradiation area has a certain illumination uniformity and illumination intensity, providing reliable and stable illumination for minimally invasive surgery.
The above specified mapping relation may be understood as a mapping relation determined by the lens, and it may be obtained based on a surface gradient ∇u_ε. Specifically, a surface shape function of the lens can be constructed based on the surface gradient ∇u_ε such that, when the light emitted by the light emitting component passes through the lens, the light projected from the lens and the light emitted by the light emitting component satisfy the specified mapping relation.

The surface gradient ∇u_ε may be understood as the surface gradient of the lens, where u_ε is a solution of the following equation:

−ε Δ²u_ε + det(D²u_ε) = E_s(ξ, η) / E_t(∇u_ε),  ζ = (ξ, η) ∈ Ω_s,  subject to the boundary condition BC,

wherein ε is a constant coefficient used to assist in calculating the solution of the equation; E_s is the illuminance distribution function of the light emitting component; ζ = {(ξ, η) | ξ² + η² ≤ 1} is the computational domain of the illuminance of the light emitting component; Ω_s is the light source domain of the light emitting component; ξ and η are respectively the abscissa and the ordinate of the projection plane ξ–η in which the light emitting component lies; I_0 is the light intensity at the central axis of the light emitting component, that is, the light intensity at a polar angle of 0 degrees; BC is a boundary condition; E_t is the preset illuminance distribution function of the target irradiation area, determined according to a preset uniformity threshold and a preset intensity threshold.
The surface gradient can be determined using the steps of the flow diagram shown in fig. 4:

Step S401: take a first initial value as the illuminance distribution function E_t of the target irradiation area.

Step S402: substitute said E_t into the above equation to obtain a solution result u_ε.

Step S403: determine, according to said u_ε, a simulated illuminance distribution function Ẽ_t of the target irradiation area.

In this step, the surface gradient can be determined from u_ε, the surface shape function of the lens can be determined from that surface gradient, and then, using the known illuminance distribution function of the light emitting component, the simulated illuminance distribution function Ẽ_t of the target irradiation area after the light has been acted on by the lens surface can be determined.

Step S404: judge whether the difference between Ẽ_t and said E_t is smaller than a preset value; if so, execute step S405; if not, execute step S406.

The difference between Ẽ_t and E_t may be their pointwise difference or the variance between them. The preset value is a value set in advance.

Step S405: take the surface gradient determined from the current u_ε as the final surface gradient.

Step S406: calculate a modified illuminance distribution function, take the modified illuminance distribution function as the illuminance distribution function E_t, and return to execute step S402.
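The flow of steps S401 to S406 can be sketched as follows. This is a schematic outline, not the patent's implementation: `solve_quasilinear_pde` and `simulate_illuminance` are hypothetical stand-ins for the equation solver and the ray-trace simulation, and the multiplicative correction in the last line is one common choice of modified illuminance distribution function:

```python
import numpy as np

def design_loop(E_target, solve_quasilinear_pde, simulate_illuminance,
                tol=1e-3, max_iter=20):
    """Iterate steps S401-S406: solve for u_eps, simulate the resulting
    illuminance on the target area, and correct E_t until the simulation
    matches the desired distribution."""
    E_t = E_target.copy()                       # S401: first initial value
    u_eps = None
    for _ in range(max_iter):
        u_eps = solve_quasilinear_pde(E_t)      # S402: solve the equation
        E_sim = simulate_illuminance(u_eps)     # S403: simulated distribution
        if np.max(np.abs(E_sim - E_target)) < tol:
            return u_eps                        # S404/S405: converged
        # S406: multiplicative feedback correction (one common choice)
        E_t = E_t * E_target / np.maximum(E_sim, 1e-12)
    return u_eps
```

With toy stand-ins (a "solver" that returns E_t unchanged and a "simulation" that halves it), the loop converges in two iterations, which illustrates the feedback mechanism without any optics.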
In one embodiment, step S402 can be performed in the following manner:

Step 1: take a second initial value and a third initial value as the values of u_ε and ε respectively.

The second initial value is a guessed solution of the equation. ε may take values from a preset decreasing constant sequence, for example 1, 10⁻¹, 10⁻², and so on.

Step 2: substitute the values of u_ε and ε into the equation.

Step 3: perform numerical discretization on the equation after substitution, and determine the solution u_ε of the discretized equation using a numerical solver.

Numerical discretization and numerical solvers are common methods for solving equations and will not be described in detail here.

Step 4: judge whether the value of ε is smaller than a preset minimum value; if so, take the determined solution u_ε as the solution result of the equation; if not, update the values of u_ε and ε and return to execute step 2.

When updating u_ε, the solution u_ε determined in step 3 may be taken as the updated u_ε. The updated value of ε may be determined from the constant sequence as the value following the substituted ε.
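Steps 1 to 4 amount to a continuation (homotopy) in ε: the regularized equation is solved at a large ε, and that solution warm-starts the solve at the next, smaller ε. A minimal sketch under stated assumptions: `solve_for_eps` is a hypothetical stand-in for the discretized Newton solve described later, and the sequence 1, 10⁻¹, 10⁻² follows the text:

```python
def continuation_in_eps(solve_for_eps, u_init, eps_seq=(1.0, 1e-1, 1e-2)):
    """Steps 1-4: solve the eps-regularized equation for each eps of a
    decreasing sequence, warm-starting each solve from the previous
    solution; the solve at the smallest eps yields the final result."""
    u = u_init                      # step 1: second initial value (a guess)
    for eps in eps_seq:             # step 1 / step 4: decreasing eps values
        u = solve_for_eps(u, eps)   # steps 2-3: substitute and solve
    return u                        # step 4: eps reached the preset minimum
```

The warm start matters: each solve begins near the previous solution, so the nonlinear solver stays in its basin of convergence as the regularization weakens.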
The derivation of the above formula is described in detail below.
Let E_s(ξ, η) and E_t(x, y) represent the irradiance distribution of the luminous component, i.e., the LED source, and the specified target irradiance distribution, respectively. As shown in fig. 5a, the objective of the present application is to find a ray mapping function φ that converts the irradiance E_s into E_t, where ζ = (ξ, η) and φ(ζ) = (x, y) are Cartesian coordinates constrained to the source domain Ω_s and the target domain Ω_t. This problem is considered a special case of the L2 Monge–Kantorovich problem. Assuming no transmission energy loss, φ should satisfy

E_t(φ(ζ)) |det(∇φ(ζ))| = E_s(ζ).  (2)

Brenier's theorem indicates that the L2 Monge–Kantorovich problem has a unique solution φ, which can be characterized as the gradient of a convex function u, i.e., φ = ∇u. Substituting ∇u for φ in formula (2), we see that u is the solution of the standard Monge–Ampère equation:

det(D²u(ζ)) = E_s(ζ) / E_t(∇u(ζ)).  (3)
it is observed that weak solutions of low-order nonlinear partial differential equations can be approximated by sequences of high-order quasi-linear partial differential equations. In order to approximate the solution of the standard Monge-Ampere equation, which is a second order nonlinear partial differential equation, a double harmonic operator with a fourth order partial derivative is a good choice.
The approximate solution of equation (3) can thus be calculated from:

−ε Δ²u_ε + det(D²u_ε) = E_s(ζ) / E_t(∇u_ε),  ε > 0,  (4)

where, if the limit lim_{ε→0⁺} u_ε exists, it is a weak solution of equation (3). The boundary of Ω_s should satisfy formula (5): points on the boundary ∂Ω_s should map onto the boundary ∂Ω_t,

f(∇u_ε(ζ)) = 0,  ζ ∈ ∂Ω_s,  (5)

where f is the mathematical expression of ∂Ω_t. Combining equations (4) and (5), the ray map ∇u_ε for designing the free-form lens can be calculated from the following quasi-linear PDE with Neumann-type boundary conditions:

−ε Δ²u_ε + det(D²u_ε) = E_s(ζ) / E_t(∇u_ε) in Ω_s,  f(∇u_ε(ζ)) = 0 on ∂Ω_s.  (6)
Computing the ray map ∇u_ε from equation (6) requires an efficient numerical method, which is described in detail in this section; steps 1 to 4 above give the calculation steps for solving formula (6). The main idea of the proposed numerical method is to approximate u_ε iteratively, updating ε in each iteration. Specifically, ε is set to a sequence of decreasing constant values, e.g., 1, 10⁻¹, 10⁻², and so on. In each iteration, the initial value of u_ε is provided either by the output u_ε of the previous iteration or manually (in the first iteration). The number of iterations depends on the number of values of ε in the sequence. We can start the iteration with ε = 1, obtaining a u_ε that solves equation (4). When ε → 0⁺, formula (4) tends to formula (3). But this does not mean that the best approximate solution u_ε is found by setting ε to 0 in the iterative process: u_ε in expression (6) is solved numerically on a grid of size h, and the final value of ε in equation (6) is related to h so as to achieve an optimized convergence speed and minimize errors. This relationship depends on the norm used. According to the experimental data obtained in the present application, the smallest global error is obtained when ε is on the order of h.
For the numerical discretization of equation (6), the quasi-linear partial differential equation and the boundary condition BC are re-expressed as:

−ε Δ²u_ε + det(D²u_ε) − E_s(ζ) / E_t(∇u_ε) = 0 in Ω_s,  with BC on ∂Ω_s.  (8)

The first and second partial derivatives in equation (8) are discretized with the central finite-difference method in the interior of Ω_s, and with forward/backward finite-difference methods with second-order correction errors in the boundary region ∂Ω_s. The discretization of the biharmonic term Δ²u_ε in equation (8) can be expressed by a thirteen-point stencil:

Δ²u_ε(ξ_i, η_j) ≈ (1/h⁴) [ 20 u_{i,j} − 8 (u_{i+1,j} + u_{i−1,j} + u_{i,j+1} + u_{i,j−1}) + 2 (u_{i+1,j+1} + u_{i+1,j−1} + u_{i−1,j+1} + u_{i−1,j−1}) + u_{i+2,j} + u_{i−2,j} + u_{i,j+2} + u_{i,j−2} ],  (9)

where (ξ_i, η_j) is abbreviated as (i, j). However, when a point near the boundary is discretized using the thirteen-point stencil in formula (9), undefined points are introduced. Fig. 5b shows an example of a thirteen-point stencil located in the boundary region; in that case, some stencil points fall outside the source region Ω_s. The approximations of these undefined values can be calculated by extrapolation from neighbouring grid values (formula (10)), using the grid size h of the two directions ξ and η and the first partial derivatives on ∂Ω_s, which are determined from the boundary condition in equation (8). The numerical discretization of equation (8) yields a set of nonlinear equations that can be expressed in the form
F(U_ε) = 0  (11)
where U_ε represents the vector of the variables u_ε. Newton's method is selected as the numerical solver to calculate the output U_ε. Then, in the current iteration, ε is compared with the grid size h: if ε > h, the initial value u_ε and ε are updated to U_ε and the next smaller ε. If ε ≤ h, the gradient of the numerical solution U_ε in the current iteration serves as the final surface gradient.
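The thirteen-point discretization of the biharmonic term (formula (9)) can be checked numerically. The sketch below is an illustrative verification, not the patent's solver: applied to u = ξ⁴, whose biharmonic is exactly 24, the stencil reproduces that value on a uniform grid:

```python
import numpy as np

def biharmonic_13pt(u, i, j, h):
    """Thirteen-point finite-difference approximation of the biharmonic
    term at interior grid point (i, j), matching the stencil of Eq. (9)."""
    return (20.0 * u[i, j]
            - 8.0 * (u[i + 1, j] + u[i - 1, j] + u[i, j + 1] + u[i, j - 1])
            + 2.0 * (u[i + 1, j + 1] + u[i + 1, j - 1]
                     + u[i - 1, j + 1] + u[i - 1, j - 1])
            + u[i + 2, j] + u[i - 2, j] + u[i, j + 2] + u[i, j - 2]) / h**4

# Check on u = xi^4, for which Delta^2 u = 24 exactly.
h = 0.1
xi = h * np.arange(-3, 4)                 # 7-point grid centered at 0
u = np.tile((xi**4)[:, None], (1, 7))     # u depends on xi only
val = biharmonic_13pt(u, 3, 3, h)         # should be close to 24.0
```

Because the stencil is exact for fourth-degree polynomials, the only error here is floating-point rounding.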
The ray mapping method proposed above requires the irradiance distribution E_s(ξ, η) of the light source LED. However, a high-power LED, generally considered a Lambertian light source, has a luminous intensity distribution in the hemispherical space defined by I = I_0 cos θ (lm·sr⁻¹), where θ denotes the polar angle of a ray and I_0 represents the luminous intensity at θ = 0°. The present embodiment applies a stereographic projection method to convert the light intensity of the light source into an irradiance distribution defined on a plane. The main idea of the method is to map the light energy in the transmission direction SP = (x_u, y_u, z_u) to the projected coordinates ζ = (ξ, η) on the ξ–η plane, as shown in fig. 5c. The irradiance E_s on the ξ–η plane takes the final form

E_s(ξ, η) = 4 I_0 (1 − ξ² − η²) / (1 + ξ² + η²)³,  ξ² + η² ≤ 1.  (12)

For grid points with ξ² + η² ≥ 1, we define E_s(ξ, η) = 0.
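The projected irradiance can be sanity-checked through energy conservation: integrating E_s over the unit disk should recover the total flux of a Lambertian source, ∫ I_0 cos θ dω = π I_0. The closed form below is an assumed reconstruction from the Lambertian model and the stereographic-projection Jacobian, not a quotation of the patent's formula:

```python
import numpy as np

def E_s(xi, eta, I0=1.0):
    """Planar irradiance of a Lambertian source I = I0*cos(theta) under
    stereographic projection onto the unit disk (reconstructed form)."""
    r2 = xi**2 + eta**2
    return np.where(r2 <= 1.0, 4.0 * I0 * (1.0 - r2) / (1.0 + r2)**3, 0.0)

# Midpoint-rule integration over [-1, 1]^2 (E_s vanishes outside the disk).
n = 1000
h = 2.0 / n
xs = -1.0 + h * (np.arange(n) + 0.5)
X, Y = np.meshgrid(xs, xs)
total_flux = np.sum(E_s(X, Y)) * h * h   # should approach pi * I0
```

That the numerical integral matches π I_0 confirms the projection redistributes the Lambertian flux over the unit disk without creating or losing energy.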
Based on the calculated ray mapping, every pair of coordinates (ξ_i, η_j) can be mapped to a point T′_{i,j} = (x′_i, y′_j, z′(x_i, y_j)) in the target-plane space Σ_G{x_G, y_G, z_G}, where i and j represent the discretization indices of the light source. Using the rotation matrix R and the translation vector T between Σ_G and the lens space Σ_L{x_L, y_L, z_L}, T′_{i,j} can be obtained from T_{i,j} in Σ_L, as shown in fig. 5d(2). Let I_{i,j} denote the unit incident ray vector from the light source, whose components are functions of (ξ_i, η_j). The present embodiment designs the initial optical surface of the light source using an easy-to-implement surface construction method. The main idea of the method is to first construct a seed curve with the point sequence p_{1,1}, …, p_{1,n}, as shown in fig. 5d(1)-①. The generated curve is then used to calculate the surface points along the second direction, fig. 5d(1)-②.
As shown in fig. 5d(1), define O_{i,j} as the unit outgoing ray from the optical surface, formulated as:

O_{i,j} = (T′_{i,j} − p_{i,j}) / ‖T′_{i,j} − p_{i,j}‖,  (13)

where p_{i,j} represents a point to be constructed on the surface. In fig. 5d(1)-①, the initial point p_{1,1} may be manually selected according to the required lens volume. Thus, O_{1,1} is calculated by formula (13). The normal vector at p_{i,j} can be calculated from Snell's law:

N_{i,j} = (n_0 O_{i,j} − n_1 I_{i,j}) / ‖n_0 O_{i,j} − n_1 I_{i,j}‖,  (14)

where n_0 represents the refractive index of the medium surrounding the lens and n_1 represents the refractive index of the lens. The next point p_{1,2} on the curve is calculated as the intersection between the ray I_{1,2} and the tangent plane defined by p_{1,1} and its normal. After the points on the first curve in fig. 5d(1)-① are obtained, the points of the curves along direction ② can be calculated by using the points on the first curve as initial points.
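The vector form of Snell's law in formula (14) can be checked against the scalar law: with the normal computed this way, n_1 sin θ_in = n_0 sin θ_out holds for the angles between the rays and the normal. A minimal sketch; the ray directions and the indices 1.0/1.49 (air/PMMA) are illustrative assumptions:

```python
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

def snell_normal(I, O, n0, n1):
    """Unit surface normal refracting unit ray I (inside the lens, index n1)
    into unit ray O (surrounding medium, index n0), as in Eq. (14)."""
    return unit(n0 * O - n1 * I)

I_ray = unit(np.array([0.2, 0.1, 1.0]))   # incident ray inside the lens
O_ray = unit(np.array([0.5, 0.3, 1.0]))   # required outgoing ray
n0, n1 = 1.0, 1.49                        # air / PMMA (illustrative)
N = snell_normal(I_ray, O_ray, n0, n1)

# Sine of the angle each ray makes with the normal.
sin_in = np.linalg.norm(np.cross(I_ray, N))
sin_out = np.linalg.norm(np.cross(O_ray, N))
```

The check works because crossing both rays with N ∝ n_0 O − n_1 I leaves the same I × O term on both sides, which is exactly the refraction invariant.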
After the free-form surface having the desired lens volume is constructed using the above method, accumulated errors mean that it cannot be guaranteed that the calculated normal vector N_{i,j} at p_{i,j} is perpendicular to the vectors from p_{i,j} to its adjacent points p_{i+1,j} and p_{i,j+1}, as shown in fig. 5d(2). To address this problem and improve the illumination performance, the present application introduces an iterative optimization technique to correct the constructed initial surface so that it better fits the normal vectors. Theoretically, if the surface mesh is fine enough, the surface point p_{i,j} and the normal vector N_{i,j} at that point should satisfy the following constraints:
(p_{i+1,j} − p_{i,j}) · N_{i,j} = 0  (15)

(p_{i,j+1} − p_{i,j}) · N_{i,j} = 0  (16)
Suppose the surface is represented with N points. Replacing p_{i,j} in equations (15) and (16) by ρ_{i,j} I_{i,j} yields N constraints F_1, …, F_N:
F_k(ρ) = ‖(ρ_{i+1,j} I_{i+1,j} − ρ_{i,j} I_{i,j}) · N_{i,j}‖ + ‖(ρ_{i,j+1} I_{i,j+1} − ρ_{i,j} I_{i,j}) · N_{i,j}‖ = 0,    (17)
where k = 1, 2, …, N and ρ_{i,j} denotes the distance between the source S and the surface point p_{i,j}. F_1(ρ)² + … + F_N(ρ)² is minimized using nonlinear least squares with the ρ_{i,j} as variables. The updated normal vectors N_{i,j} are calculated according to equation (14) using the ρ and the ray mapping of the current iteration. New values of ρ are computed iteratively until the calculated surface points satisfy the convergence condition ‖ρ_t − ρ_{t−1}‖ < δ, where t is the current iteration number and δ the stopping threshold. Finally, the optical surface can be represented from the free-form surface points using Non-Uniform Rational B-Splines (NURBS).
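A minimal sketch of the correction step: the orthogonality constraints behind (17) are minimized over the distances ρ with SciPy's nonlinear least-squares solver. The toy setup (rays from the origin, the constant normal field of a plane) is a hypothetical stand-in for the real ray mapping and Snell normals, and each neighbouring edge enters as its own signed residual, which is equivalent to minimizing the sum of squares of (17).

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(rho, I, N):
    """Constraints (15)-(16) with p_ij = rho_ij * I_ij: every edge vector
    between neighbouring surface points must be orthogonal to the local
    normal; each edge contributes one signed residual."""
    m, n = N.shape[:2]
    P = rho.reshape(m, n)[..., None] * I   # surface points, shape (m, n, 3)
    r = [np.dot(P[i + 1, j] - P[i, j], N[i, j])
         for i in range(m - 1) for j in range(n)]
    r += [np.dot(P[i, j + 1] - P[i, j], N[i, j])
          for i in range(m) for j in range(n - 1)]
    return np.array(r)

# toy data: rays from the source through a 4x4 fan of directions, with the
# constant normal field of a plane z = const as the target normals
m = n = 4
xs = np.linspace(-0.3, 0.3, m)
I = np.zeros((m, n, 3))
for i in range(m):
    for j in range(n):
        I[i, j] = np.array([xs[i], xs[j], 1.0]) / np.linalg.norm([xs[i], xs[j], 1.0])
N = np.tile([0.0, 0.0, 1.0], (m, n, 1))
sol = least_squares(residuals, np.full(m * n, 5.0), args=(I, N))
P = sol.x.reshape(m, n)[..., None] * I      # corrected surface points
```

With these normals the only surfaces satisfying all constraints are planes of constant z, so the optimizer flattens the point cloud, which is the behaviour the patent's correction step relies on.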
With the point-light-source assumption, the illuminance uniformity may degrade when an extended-size LED is used, especially when a small-volume optical lens is designed. This problem can be alleviated by a feedback correction method. Let E_t(x, y) denote the desired illuminance distribution on the target area and Ẽ^k(x, y) the simulated illuminance distribution after applying the free-form lens in iteration k. The corrected illuminance distribution for the next iteration can be defined as:

E_t^{k+1}(x, y) = (E_t(x, y) / Ẽ^k(x, y)) · E_t^k(x, y)    (18)
In each iteration it is checked whether the illumination performance has reached a satisfactory illuminance uniformity. If so, the free-form optical lens design is complete; otherwise, the next iteration corrects the surface of the free-form lens.
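The feedback loop described above can be sketched as follows. Here `simulate` is a hypothetical callback standing in for a full ray-trace of the lens designed from the current prescription, and the uniformity threshold is an assumed stopping criterion.

```python
import numpy as np

def feedback_correct(E_target, simulate, max_iters=10, u_min=0.95):
    """Iteratively rescale the prescribed illuminance by the ratio of the
    desired to the simulated illuminance until uniformity is satisfactory."""
    E_k = E_target.astype(float)
    for _ in range(max_iters):
        E_sim = simulate(E_k)                  # illuminance of current design
        uniformity = 1.0 - E_sim.std() / E_sim.mean()
        if uniformity >= u_min:                # design complete
            break
        E_k = E_k * E_target / np.maximum(E_sim, 1e-12)
    return E_k

# toy check: a design whose illuminance is a fixed spatial distortion d
d = 1.0 + 0.2 * np.linspace(-1.0, 1.0, 64).reshape(8, 8)
E_t = np.full((8, 8), 100.0)
E_corr = feedback_correct(E_t, lambda E: d * E)
```

For a purely multiplicative distortion the loop converges in one correction: the rescaled prescription E_t/d exactly cancels the distortion on the next simulation.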
Fig. 6 is a schematic structural diagram of a robot camera according to an embodiment of the present application. The robot camera includes: the camera module 601 and any one of the lighting devices 602 provided by the embodiments of the present application;
the camera module is fixed in the middle of the wing part; when the wing part is in the expanded state, the camera module can collect images, and when the wing part is in the folded state, the camera module is housed inside the wing part. The camera module 601 may include sub-components such as an imaging sensor and a lens.
In the related art, a coaxial arrangement of the imaging sensor and the light source can cause a lack of shadow-depth cues in the output two-dimensional image, resulting in insufficient depth and position information in the images acquired by the camera module. A coaxial configuration is one in which the central axis of the imaging sensor is parallel to the central axis of the light source.
By fixing the camera module at the middle position (or another position) of the wing part, the imaging sensor and the light source are placed in a non-coaxial configuration, that is, one in which the central axis of the imaging sensor and the central axis of the light source are not parallel. The non-coaxial configuration allows the camera module to capture more shadow-depth information, so the robot camera of this embodiment can acquire images with better depth information.
In summary, in the present embodiment the lighting device comprises a wing part with at least three spatially uniformly arranged wings, with a light-emitting part and a lens part located on each wing. Therefore, no matter where the camera module is fixed on the lighting equipment, the non-coaxial configuration of the imaging sensor and the light source is ensured, which increases the shadow-depth information in the image.
Fig. 7 is a schematic structural diagram of a robot camera according to an embodiment of the present disclosure. The figure includes the anchor component 106, the first worm-and-gear set 107 connected to the tilt motion mechanism 105, the second worm-and-gear set 107 connected to the wing spread mechanism, the wing component 101, and the light-emitting component 103 and lens component 104 on the wing component 101. The camera module 601 is located at the middle position of the wing component.
In this embodiment, the light emitted from the light-emitting part is bent after passing through the lens part and finally illuminates the target irradiation area. In a specific embodiment, the extent of the target irradiation area of the lighting device at a preset distance is not smaller than the extent of the image acquisition area of the camera module at that distance. The images collected by the camera module are therefore contained within the target irradiation area, yielding better imaging quality.
In this application, the applicant evaluated the performance of the lens design method for laparoscopy. Fig. 8(a) and (b) show an on-axis experiment and an off-axis experiment, which respectively verify the effectiveness of the optical design method in different application scenarios using optical design software. Polymethyl methacrylate (PMMA) with a refractive index of 1.49 is used as the lens material, and a Nichia NCSWE17A LED with 118 lm luminous flux is used as the light source. To verify that the method provided by the embodiments of the present application is flexible enough to design free-form optical lenses for target irradiation areas of different patterns, the applicant set circular and square target irradiation areas in the on-axis illumination test. The specifications are shown in Table 1.
Table 1 evaluation criteria of free-form surface optical design method
Calculation of the ray mapping. First, the light intensity distribution of the LED (Fig. 8(c)) is converted into a normalized illuminance distribution (Fig. 8(d)). The computational domain of the LED, ξ ∈ [−1, 1], η ∈ [−1, 1], is discretized by an 81 × 81 grid. According to the ray mapping algorithm, the minimum value of ε is determined to be 0.025 for the grid size h = 0.025. The present application selects the sequence 1, 0.5, 0.025 for ε to approximate the numerical solution of the ray mapping. To verify the effectiveness of the ray-mapping generation method of the embodiment, the intermediate ray-mapping results calculated with ε = 1, 0.5 and 0.025 are demonstrated. The ray mapping calculated with ε = 0.025 is used to generate the initial surface of the free-form optical lens of the LED.
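The ε-continuation strategy (solving at ε = 1, then 0.5, then 0.025, each solve warm-started from the previous result) can be sketched generically. `solve_at_eps` is a placeholder for the discretized-equation solver, which is not reproduced here.

```python
def solve_with_continuation(solve_at_eps, eps_sequence=(1.0, 0.5, 0.025), u0=None):
    """Solve the regularized ray-mapping problem for a decreasing sequence
    of epsilon values, warm-starting each solve with the previous result."""
    u = u0
    for eps in eps_sequence:
        u = solve_at_eps(eps, u)   # intermediate mapping for this epsilon
    return u

# trivial stand-in solver just to exercise the warm-start chain
result = solve_with_continuation(lambda eps, u: (0.0 if u is None else u) + eps)
```

The point of the sequence is robustness: the heavily regularized ε = 1 problem is easy to solve, and its solution keeps the harder ε = 0.025 solve inside the convergence basin.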
Fig. 8 shows the simulation setup for evaluating the free-form optical design method. (a) On-axis test: the LED axis coincides with the axis of the target irradiation area; in this test the target irradiation area is circular or square. (b) Off-axis test: the offset Δd between the axis of the LED and the axis of the target irradiation area is 5 mm, 10 mm or 15 mm; only a circular target irradiation area is used. (c) The LED light intensity distribution obtained from the LED datasheet. (d) The converted LED illuminance distribution.
Fig. 9 shows the on-axis ray mapping relationships calculated for the circular and square target irradiation areas with ε = 1, 0.5 and 0.025 on an 81 × 81 grid. For clearer visualization, a 61 × 61 grid is displayed in the figure.
Fig. 10 shows the convergence rate of the ray-mapping generation method. The convergence rate is characterized by the residual value ‖F‖₂ of equation (11) versus the iteration number; the residual ‖F‖₂ is in millimeters. Considering that features of the free-form optical lens may be on the sub-micron scale (10⁻⁴ mm), the convergence threshold can conservatively be set on the order of nanometers (10⁻⁷ mm). In all experiments, ‖F‖₂ reaches 10⁻⁷ within 10 iterations. In Fig. 10, (a)–(c) and (d)–(f) show the convergence rates for the circular and square areas, respectively, with ε = 1, 0.5 and 0.025.
On-axis testing of the free-form optical lens design. Fig. 8(a) shows the simulation setup of the on-axis test. The on-axis test uses a circular target irradiation area with radius R = 80 mm and a square target irradiation area with side length 2R = 160 mm. The illumination distance from the LED to the center of the target irradiation area is set to D = 100 mm. Fig. 11(a) and (b) show the designed lens profiles with marked dimensions, and Fig. 11(c) and (d) show the simulated illuminance distributions on the target irradiation area. Taking Fresnel losses into account, the optical efficiencies of the free-form lenses are 88.3% and 90.5%, respectively. The illuminance uniformity U can be calculated by equation (19):

U = 1 − σ/μ    (19)
where σ and μ are the standard deviation and mean of the sampled illuminance data. Table 2 details the optical performance measured on-axis.
TABLE 2 optical Properties measured on-axis
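The uniformity metric of equation (19) is a one-liner; the sketch below applies it to hypothetical sampled illuminance values.

```python
import numpy as np

def illuminance_uniformity(E):
    """Eq. (19): U = 1 - sigma/mu for sampled illuminance values E."""
    E = np.asarray(E, dtype=float)
    return 1.0 - E.std() / E.mean()

U_flat = illuminance_uniformity(np.full((4, 4), 123.0))   # perfectly uniform
U_var = illuminance_uniformity([90.0, 100.0, 110.0])      # some spread
```

Perfectly uniform data gives U = 1, and any spread pushes U below 1, which is how the 9x%-range figures in Tables 2 and 3 should be read.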
Fig. 11 shows the on-axis free-form lens designs for the two illumination patterns. (a) and (b) show the lens profiles for the circular and square areas, respectively; (c) and (d) show the illuminance uniformity achieved on the target plane by (a) and (b), respectively.
Off-axis testing of the free-form optical lens design. Fig. 8(b) illustrates the simulation setup for the off-axis test. The irradiation area is a circular area with radius R = 80 mm, and the distance from the LED to the target plane is D = 100 mm. Axial offsets Δd = 5 mm, 10 mm and 15 mm are introduced to evaluate the performance when the axis of the LED and the axis of the target irradiation area do not coincide. To construct a free-form optical surface in this more general scenario, a transformation matrix is required to transform the ray mapping from global coordinates into the local coordinates of the LED. Fig. 12 shows the designed lens profiles and the simulated illuminance distributions for each case. Because of the axis offset, the optical lens is no longer symmetrical; accordingly, this embodiment provides front and side views of the lens, as shown in Fig. 12(a), (d) and (g). Fig. 12(b), (e) and (h) show the simulated illuminance distributions on the circular target irradiation area. Taking Fresnel losses into account, the optical efficiencies of the free-form lenses are 88.06%, 87.74% and 88.15%, respectively. Fig. 12(c), (f) and (i) show the illuminance uniformity in the horizontal and vertical directions within the irradiation area. The optical performance of the off-axis test is summarized in Table 3.
TABLE 3 optical Properties of the off-axis test
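The global-to-local transform used in the off-axis case can be sketched as a rigid transform; the convention p_L = R·p_G + T, the identity rotation, and the chosen Δd are illustrative assumptions.

```python
import numpy as np

def to_led_local(points_global, R, T):
    """Map target points from global coordinates Sigma_G into the LED's
    local frame Sigma_L: p_L = R @ p_G + T (assumed convention)."""
    return points_global @ R.T + T

# off-axis sketch: frames parallel, target axis shifted by delta_d along x
delta_d = 10.0                                  # mm, one of the tested offsets
R = np.eye(3)
T = np.array([delta_d, 0.0, 0.0])
target_centre = np.array([[0.0, 0.0, 100.0]])   # D = 100 mm on the global axis
local = to_led_local(target_centre, R, T)
```

Once the mapped target points are expressed in the LED frame, the same surface construction of equations (13)–(14) applies unchanged, which is why the asymmetry shows up only in the resulting lens profile.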
Final design of the LED free-form optical lens. Referring to the configuration of the lighting apparatus in Fig. 13, the lens mounting position on the wing is set to L = 20.5 mm. For the deployed mode, the opening angle of the wing is set to β = 80°. In the design, a lens volume with a maximum radial length ρ_max = 5.4 mm is specified to ensure that three lenses can fit into the robot camera. The initial illumination distance is set to D = 100 mm, and the radius of the circular target area to R = 80 mm. Table 4 summarizes the specifications of the free-form optical lens design for the laparoscopic illumination device.
Table 4 specification of lighting device settings
Fig. 13 shows the three-dimensional (3D) design of the laparoscopic illumination device. Fig. 13(a) shows three views of the free-form surface. Fig. 13(b) shows the compactness of the lens satisfying the lens volume limitation. Fig. 13(c) shows the integration of the lens and LED in one wing. Fig. 13(d) shows the 3D structure of the assembled laparoscopic illumination apparatus.
Illumination performance on the target irradiation area. The performance of the developed lighting device was evaluated according to the simulation settings in Table 4. Because of the symmetrical arrangement of the three wings, a single LED is energized first, emitting light through its free-form lens. Fig. 14(a) shows the illuminance distribution on the target irradiation area. Taking Fresnel losses into account, the optical efficiency of the designed free-form lens is 89.45%, meaning that 105.55 lm of the total 118 lm luminous flux is successfully projected onto the desired target irradiation area. The average illuminance provided by the single LED is 5473.8 lx. According to equation (19), the horizontal and vertical illuminance uniformities are 95.87% and 94.78%, respectively, as shown in Fig. 14(b).
Fig. 14(c) shows the illuminance distribution on the target irradiation area when all the LEDs are powered. In this case, the total luminous flux provided by the lighting device is 354 lm, of which 316.58 lm falls on the target irradiation area, an optical efficiency of 89.43%. The average illuminance of the target irradiation area is 12,441 lx. Fig. 14(d) shows that the illuminance uniformities in the horizontal and vertical directions are 96.33% and 96.79%, respectively. Fig. 14(e) shows the illuminance distribution of the target irradiation area as a 3D profile. The evaluation results of the illumination performance are summarized in Table 6; it can be clearly seen that the laparoscopic illumination device developed in the embodiment of the present application satisfies all the design requirements in Table 6.
TABLE 6 design requirements of laparoscopic lighting devices
Beam refocusing. In MIS, the distance D between the camera and the target surgical area may be less than 100 mm after the in-vivo laparoscopic system is inserted into the abdominal cavity. Although the wings of the lighting device can still provide good illumination in this range at an angle β = 80°, the illuminance uniformity is reduced and more energy is wasted outside the field of view (FOV).
The in-vivo laparoscopic lighting equipment provided by the embodiment of the present application has a refocusing function: by adjusting the angle of the wings, it can uniformly light the target irradiation area when the distance from the camera module to the target changes, thereby steering the light beams. In Fig. 15(a), the desired illumination distance is set to D = 60 mm. When the wing angle β is set to 80°, the value best suited to D = 100 mm, the illuminated area is marked with yellow lines. To refocus the light on the target irradiation area when D = 60 mm, the span angle is reduced from β to β − Δβ, where Δβ can be determined from the angle θ between the green and yellow dashed arrows. According to the geometry of this setup, θ is calculated as 6°. Similarly, to illuminate the target irradiation area at D = 80 mm, the wing angle should be decreased by θ = 3° from the initial angle β = 80°.
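One plausible way to read the Δβ geometry is to assume the lens sits at a fixed radial offset from the camera axis and that Δβ equals the change in the angle subtended by the on-axis target centre; both the model and the r_lens value below are assumptions for illustration, not the patent's exact construction.

```python
import math

def refocus_delta_beta(r_lens, d0, d_new):
    """Assumed refocusing geometry: fold the wing in by the change in the
    angle from the lens (radial offset r_lens, in mm) to the on-axis target
    centre when the illumination distance moves from d0 to d_new.
    Returns the correction in degrees."""
    return math.degrees(math.atan2(r_lens, d_new) - math.atan2(r_lens, d0))

# trend check: shortening the working distance requires folding the wings
# further in, and more so for D = 60 mm than for D = 80 mm, matching the
# 6-degree vs 3-degree corrections quoted in the text
d60 = refocus_delta_beta(20.5, 100.0, 60.0)
d80 = refocus_delta_beta(20.5, 100.0, 80.0)
```

The sketch reproduces only the qualitative trend (larger correction for shorter distance); the patent's exact θ values follow from the full setup geometry in Fig. 15(a).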
Fig. 15(b)–(e) show the illuminance distributions on the target plane after refocusing the beams for D = 60 mm and D = 80 mm. In the case of Fig. 15(b) and (c), β is set to 74°. The average illuminance of a circular area with radius R = 48 mm is calculated as 45,823 lx; taking Fresnel losses into account, the optical efficiency is about 92%, and the illuminance uniformities in the horizontal and vertical directions are 98.29% and 98.22%, respectively. In the case of Fig. 15(d) and (e), β is set to 77° to irradiate a target irradiation area at D = 80 mm. The average illuminance of a circular area with radius R = 64 mm is calculated as 24,172 lx, the optical efficiency is 90.9% considering Fresnel losses, and the horizontal and vertical illuminance uniformities are 95.37% and 95.98%, respectively. The illumination performance of the refocused beams is summarized in Table 7.
TABLE 7 illumination Performance of light refocusing test
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for system embodiments, since they are substantially similar to method embodiments, they are described in a relatively simple manner, and reference may be made to some descriptions of method embodiments for relevant points.
The above description is only for the preferred embodiment of the present application, and is not intended to limit the scope of the present application. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application are included in the protection scope of the present application.
Claims (7)
1. An illumination device, comprising: a wing part with at least three spatially uniformly arranged wings, a wing spread mechanism, a light-emitting part on each wing, and a lens part covering the outer side of each light-emitting part;
the wing spread mechanism is connected with the wing part and is capable of causing the wing part to deploy; when the illumination device is in an operating state, the wing part is in the spread state;
the lens part maps the light emitted by the light-emitting part onto a target irradiation area according to a specified mapping relation; the specified mapping relation is a mapping relation under which the illuminance uniformity of the target irradiation area of the illumination device at a preset distance is not less than a preset uniformity threshold and the illuminance intensity is not less than a preset intensity threshold; the specified mapping relation is determined according to the refractive index of the lens part, a specified volume of the lens part, the size of the light-emitting part, the light intensity distribution of the light-emitting part, and the relative position between the light-emitting part and the target irradiation area;
the specified mapping relation is obtained based on a surface gradient ∇u, where ∇u is a solution of the following equation:
wherein E_s is the illuminance distribution function of the light-emitting part; ζ is the computational domain of the illumination of the light-emitting part, ζ = {(ξ, η) | ξ² + η² ≤ 1}; Ω_s is the light source domain of the light-emitting part; ξ and η are respectively the abscissa and the ordinate of the projection plane where the light-emitting part is located; I_0 is the light intensity at the central axis of the light-emitting part; BC is a boundary condition; E_t is the preset illuminance distribution function of the target irradiation area; and E_t is determined according to the preset uniformity threshold and the preset intensity threshold.
2. A lighting device as recited in claim 1, further comprising: a tilt movement mechanism; the tilting motion mechanism is capable of causing the lighting device to tilt.
3. A lighting device as recited in claim 2, further comprising: an anchor member; the anchoring component is used for anchoring the lighting device at a target position.
4. The illumination device as recited in claim 1, wherein the surface gradient ∇u is determined in the following manner:
taking a first initial value as an illumination distribution function E of the target illumination areat;
substituting said E_t into the equation to obtain a solution result u_ε; and

determining a simulated illuminance distribution function of the target irradiation area according to said u_ε.
5. The illumination device according to claim 4, wherein the solution result u_ε of the equation is obtained in the following manner:

taking a second initial value and a third initial value as the values of u_ε and ε, respectively;

substituting the values of u_ε and ε into the equation; and

performing numerical discretization on the equation after the values are substituted, and determining the solution u_ε of the discretized equation by using a numerical solver.
6. A robot camera, comprising: a camera module and a lighting device according to any one of claims 1 to 5;
the camera module is fixed at the middle position of the wing part; when the wing part is in the expanded state, the camera module can collect images, and when the wing part is in the folded state, the camera module is housed inside the wing part.
7. The robot camera according to claim 6, wherein a range of a target irradiation area of the illumination apparatus at a preset distance is not smaller than a range of an image capturing area of the camera module at the preset distance.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711311241.8A CN109899711B (en) | 2017-12-11 | 2017-12-11 | Lighting apparatus and robot camera |
PCT/CN2018/119610 WO2019114607A1 (en) | 2017-12-11 | 2018-12-06 | Lighting apparatus and robot camera |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711311241.8A CN109899711B (en) | 2017-12-11 | 2017-12-11 | Lighting apparatus and robot camera |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109899711A CN109899711A (en) | 2019-06-18 |
CN109899711B true CN109899711B (en) | 2021-04-02 |
Family
ID=66818966
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711311241.8A Active CN109899711B (en) | 2017-12-11 | 2017-12-11 | Lighting apparatus and robot camera |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN109899711B (en) |
WO (1) | WO2019114607A1 (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN2624817Y (en) * | 2003-07-19 | 2004-07-14 | 黄长征 | Self-examining equipment for human body cavities |
CN101043842A (en) * | 2004-11-29 | 2007-09-26 | 奥林巴斯株式会社 | Body insertable apparatus |
CN101052341A (en) * | 2004-09-03 | 2007-10-10 | 斯特赖克Gi有限公司 | Optical head for endoscope |
CN103989451A (en) * | 2013-02-14 | 2014-08-20 | 索尼公司 | Endoscope and endoscope apparatus |
CN204379240U (en) * | 2014-12-22 | 2015-06-10 | 谢辉 | A kind of peritoneoscope |
JP2017209235A (en) * | 2016-05-24 | 2017-11-30 | オリンパス株式会社 | Endoscope |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8516691B2 (en) * | 2009-06-24 | 2013-08-27 | Given Imaging Ltd. | Method of assembly of an in vivo imaging device with a flexible circuit board |
CN105276394A (en) * | 2014-06-04 | 2016-01-27 | 常州超闪摄影器材有限公司 | Photographic lamp |
Also Published As
Publication number | Publication date |
---|---|
CN109899711A (en) | 2019-06-18 |
WO2019114607A1 (en) | 2019-06-20 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||