US10565462B2 - Electronic apparatus, method of controlling electronic apparatus, and program - Google Patents
- Publication number: US10565462B2 (application US15/739,248; application publication US201715739248A)
- Authority: US (United States)
- Prior art keywords
- section
- electronic apparatus
- light
- shape
- irradiation
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
-
- G06K9/3208—
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/88—Lidar systems specially adapted for specific applications
- G01S17/89—Lidar systems specially adapted for specific applications for mapping or imaging
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/02—Systems using the reflection of electromagnetic waves other than radio waves
- G01S17/06—Systems determining position data of a target
- G01S17/08—Systems determining position data of a target for measuring distance only
- G01S17/32—Systems determining position data of a target for measuring distance only using transmission of continuous waves, whether amplitude-, frequency-, or phase-modulated, or unmodulated
- G01S17/36—Systems determining position data of a target for measuring distance only using transmission of continuous waves, whether amplitude-, frequency-, or phase-modulated, or unmodulated with phase comparison between the received signal and the contemporaneously transmitted signal
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/48—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
- G01S7/481—Constructional features, e.g. arrangements of optical elements
- G01S7/4814—Constructional features, e.g. arrangements of optical elements of transmitters alone
-
- G06K9/2027—
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/06—Topological mapping of higher dimensional structures onto lower dimensional surfaces
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/521—Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/10—Image acquisition
- G06V10/12—Details of acquisition arrangements; Constructional details thereof
- G06V10/14—Optical characteristics of the device performing the acquisition or on the illumination arrangements
- G06V10/145—Illumination specially adapted for pattern recognition, e.g. using gratings
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/24—Aligning, centring, orientation detection or correction of the image
- G06V10/242—Aligning, centring, orientation detection or correction of the image by image rotation, e.g. by 90 degrees
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/56—Cameras or camera modules comprising electronic image sensors; Control thereof provided with illuminating means
-
- G06K9/3258—
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/62—Text, e.g. of license plates, overlay texts or captions on TV images
- G06V20/63—Scene text, e.g. street names
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
- H04N13/254—Image signal generators using stereoscopic image cameras in combination with electromagnetic radiation sources for illuminating objects
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/271—Image signal generators wherein the generated image signals comprise depth maps or disparity maps
Definitions
- the present technology relates to an electronic apparatus, a method of controlling the electronic apparatus, and a program for causing a computer to execute the method.
- the present technology relates to an electronic apparatus that performs character recognition, a method of controlling the electronic apparatus, and a program for causing a computer to execute the method.
- OCR: optical character recognition
- Patent Literature 1 Japanese Patent Application Laid-open No. 2003-323693
- the present technology has been made in view of the circumstances as described above, and it is an object of the present technology to accurately recognize characters in an image in an electronic apparatus that captures images.
- an electronic apparatus, a method of controlling the electronic apparatus, and a program for causing a computer to execute the method
- the electronic apparatus including: an imaging section that images an object and captures image data; a distance measurement section that measures distances from the imaging section to a plurality of measurement points on a surface of the object; a shape estimation section that estimates a shape of the object from the measured distances; and a coordinate conversion section that performs coordinate conversion on the image data, the coordinate conversion including converting three-dimensional coordinates on the surface of the object into plane coordinates on a predetermined reference plane on the basis of the estimated shape. This produces an effect that the three-dimensional coordinates on the surface of the object are converted into the plane coordinates on the reference plane.
- the electronic apparatus may further include a character recognition section that recognizes a character on the surface of the object in the image data that has been subjected to the coordinate conversion. This produces an effect that the character is recognized.
- the electronic apparatus may further include an irradiation section that performs irradiation with irradiation light, in which the distance measurement section may measure the distances from a phase difference between reflected light of the irradiation light and the irradiation light. This produces an effect that the distances are measured from the phase difference between the reflected light and the irradiation light.
- the imaging section may perform processing of capturing the image data and processing of receiving the irradiation light. This produces an effect that the image data is captured and the irradiation light is received in the identical imaging section.
- the irradiation section may perform irradiation with pulsed light as the irradiation light, the pulsed light being in synchronization with a predetermined cycle signal. This produces an effect that irradiation with the pulsed light is performed.
- the irradiation section may select spotlight or diffused light according to a predetermined operation and perform irradiation with the light as the irradiation light, and the distance measurement section may measure the distances when irradiation is performed with the diffused light. This produces an effect that irradiation with the spotlight or diffused light is performed.
- the irradiation section may start irradiation with the irradiation light in a case where a predetermined button is pressed halfway down, and the imaging section may capture the image data in a case where the predetermined button is pressed all the way down. This produces an effect that irradiation with the irradiation light and imaging are performed according to an operation of the button.
- the electronic apparatus may be a camera unit that is attached to a wearable terminal. This produces an effect that the three-dimensional coordinates on the surface of the object are converted into the plane coordinates on the reference plane in a camera unit.
- the shape estimation section may estimate any one of a plurality of candidate shapes as the shape of the object on the basis of the distances. This produces an effect that any one of the plurality of candidate shapes is estimated as the shape of the object.
- the shape estimation section may include a coordinate detection section that detects coordinates of the plurality of measurement points as measured coordinates on the basis of the distances, a function acquisition section that acquires, for each of the plurality of candidate shapes, a function representing a relationship between coordinates of a candidate shape and coordinates of a predetermined reference coordinate system by using the measured coordinates, an error computation section that computes an error at a time when the shape of the object is assumed for each of the plurality of candidate shapes on the basis of the acquired function and the measured coordinates, and an estimation processing section that estimates a shape having the smallest error in the plurality of candidate shapes as the shape of the object. This produces an effect that the shape having the smallest error in the plurality of candidate shapes is estimated as the shape of the object.
- the image data may include a plurality of pixel data.
- the imaging section may include phase difference detection pixels that detect a phase difference between two pupil-split images, and normal pixels that perform photoelectric conversion on light and generate any of the plurality of pixel data.
- the distance measurement section may measure the distances on the basis of the phase difference detected by the phase difference detection pixels. This produces an effect that the distances are measured on the basis of the phase difference detected by the phase difference detection pixels.
- FIG. 1 is a block diagram showing a configuration example of an electronic apparatus in a first embodiment of the present technology.
- FIG. 2 is a block diagram showing a configuration example of a three-dimensional shape estimation section in the first embodiment of the present technology.
- FIG. 3 is a diagram showing an example of distance information in the first embodiment of the present technology.
- FIG. 4 is a diagram showing an example of measured coordinates in the first embodiment of the present technology.
- FIG. 5 is a diagram showing an example of parameters in the first embodiment of the present technology.
- FIG. 6 is a block diagram showing a configuration example of an imaging device in the first embodiment of the present technology.
- FIG. 7 is a block diagram showing a configuration example of a pixel circuit in the first embodiment of the present technology.
- FIG. 8 is a block diagram showing a configuration example of a coordinate conversion section in the first embodiment of the present technology.
- FIG. 9 is a diagram showing an example of the shape of laser light in the first embodiment of the present technology.
- FIG. 10 is a timing chart showing an example of exposure control of a pixel circuit within a Q1Q2 detection period in the first embodiment of the present technology.
- FIG. 11 is a timing chart showing an example of the exposure control of the pixel circuit within a Q3Q4 detection period in the first embodiment of the present technology.
- FIG. 12 is a timing chart showing an example of the exposure control of the pixel circuit within an imaging period in the first embodiment of the present technology.
- FIG. 13 is a diagram showing an example of a relationship between a reference coordinate system and a planar coordinate system in the first embodiment of the present technology.
- FIG. 14 is a diagram showing an example of a relationship between the reference coordinate system and a columnar coordinate system in the first embodiment of the present technology.
- FIG. 15 is a diagram showing an example of a usage scene of the electronic apparatus when an inclined plane is imaged in the first embodiment of the present technology.
- FIG. 16 is a diagram showing an example of a planar shape, images before and after conversion, and a result of translation in the first embodiment of the present technology.
- FIG. 17 is a diagram showing an example of a usage scene of the electronic apparatus when a column is imaged in the first embodiment of the present technology.
- FIG. 18 is a diagram showing an example of a columnar shape, images before and after conversion, and a result of translation in the first embodiment of the present technology.
- FIG. 19 is a flowchart showing an example of the operation of the electronic apparatus in the first embodiment of the present technology.
- FIG. 20 is a block diagram showing a configuration example of an imaging system in a modified example of the first embodiment of the present technology.
- FIG. 21 is an example of an outer appearance view of the imaging system in the modified example of the first embodiment of the present technology.
- FIG. 22 is a block diagram showing a configuration example of an electronic apparatus in a second embodiment of the present technology.
- FIG. 23 is a block diagram showing a configuration example of an imaging device in the second embodiment of the present technology.
- FIG. 24 is a flowchart showing an example of the operation of the electronic apparatus in the second embodiment of the present technology.
- Second Embodiment (example of estimating shape on the basis of results of distance measurement using phase difference detection pixels and performing coordinate conversion)
- FIG. 1 is a block diagram showing a configuration example of an electronic apparatus 100 in a first embodiment.
- the electronic apparatus 100 includes an operation section 111, a control section 112, a laser light irradiation section 113, an insertion and removal section 114, a diffuser plate 115, a three-dimensional shape estimation section 200, a distance measurement section 116, a switch 117, and an imaging device 300. Further, the electronic apparatus 100 includes an imaging lens 118, a coordinate conversion section 400, an optical character recognition section 119, a translation processing section 120, and a sound output section 121.
- the operation section 111 generates an operation signal according to a user operation on a button or a switch.
- the electronic apparatus 100 is provided with, for example, a button capable of being pressed down in two stages.
- the operation section 111 generates an operation signal indicating any one of a state where that button is not pressed down, a state where that button is pressed halfway down, and a state where that button is pressed all the way down, and supplies the operation signal to the control section 112 .
- the laser light irradiation section 113 performs irradiation with visible light (red light and the like) having directivity, as laser light, in a predetermined direction under the control of the control section 112 .
- the laser light is applied in, for example, a direction substantially parallel to an optical axis direction of the imaging lens 118 . It should be noted that the laser light irradiation section 113 is an example of an irradiation section described in the Claims.
- the insertion and removal section 114 performs processing of inserting the diffuser plate 115 into an optical path of the laser light or processing of removing the diffuser plate 115 from the optical path under the control of the control section 112 .
- the insertion and removal section 114 is achieved by an actuator such as a motor.
- the diffuser plate 115 diffuses the laser light. Before insertion of the diffuser plate 115 , the shape of the laser light is spot-like. Meanwhile, after insertion of the diffuser plate 115 , the laser light is diffused and becomes circular, for example. It should be noted that the shape of the diffused laser light is not limited to be circular and may be linear or triangular.
- switching of the shape of the laser light can be achieved by using, for example, one similar to a laser pointer LP-RD312BKN manufactured by SANWA SUPPLY INC. or a laser pointer ELP-G20 manufactured by KOKUYO Co., Ltd., as the laser light irradiation section 113 .
- the imaging lens 118 condenses the light and guides the light to the imaging device 300 .
- the imaging device 300 images an object and generates image data under the control of the control section 112 .
- in the imaging device 300, a plurality of pixels arrayed in a two-dimensional lattice manner are disposed. Each of the pixels generates pixel data having a level corresponding to the amount of light received.
- the imaging device 300 supplies image data including those pieces of pixel data to the switch 117 . It should be noted that the imaging lens 118 and the imaging device 300 are an example of an imaging section described in the Claims.
- the switch 117 switches an output destination of the image data from the imaging device 300 under the control of the control section 112 .
- the switch 117 outputs the image data to any of the distance measurement section 116 and the coordinate conversion section 400 .
- the distance measurement section 116 measures distances from the imaging device 300 to a plurality of measurement points on the object under the control of the control section 112 .
- the measurement point is a point irradiated with the diffused laser light.
- the distance measurement section 116 measures a distance on the basis of a phase difference between the applied laser light and reflected light to that laser light. This distance measurement method is called a ToF (Time of Flight) method.
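The continuous-wave ToF relationship between the measured phase difference and the distance can be sketched as follows. This is a minimal illustration of the standard formula, not the patent's implementation; the function and variable names are ours, and the 20 MHz value is the modulation frequency mentioned later in the text.

```python
import math

C = 299_792_458.0  # speed of light in m/s

def tof_distance(phase_rad: float, mod_freq_hz: float) -> float:
    """One-way distance from the phase lag of continuous-wave ToF light.

    The reflected light lags the emitted light by phase_rad radians, i.e.
    the round trip takes phase_rad / (2*pi) modulation periods, so the
    one-way distance is c * phase / (4 * pi * f).
    """
    return C * phase_rad / (4.0 * math.pi * mod_freq_hz)

# At 20 MHz modulation, a full 2*pi phase wrap corresponds to c / (2 * f),
# the unambiguous range of the measurement (about 7.5 m).
unambiguous_range = C / (2.0 * 20e6)
```

Note the trade-off this formula implies: a higher modulation frequency improves distance resolution but shortens the unambiguous range.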
- the distance measurement section 116 supplies distance information to the three-dimensional shape estimation section 200 , the distance information indicating a distance of each of the measurement points.
- the three-dimensional shape estimation section 200 estimates the shape of the object on the basis of the distance information.
- the three-dimensional shape estimation section 200 supplies parameters associated with the estimated shape of the object to the coordinate conversion section 400 . Details of the parameters will be described later. It should be noted that the three-dimensional shape estimation section 200 is an example of a shape estimation section described in the Claims.
- the coordinate conversion section 400 performs predetermined coordinate conversion on the image data by using the parameters from the three-dimensional shape estimation section 200 .
- the coordinate conversion is processing of converting coordinates on a surface of the object irradiated with the laser light into coordinates on a predetermined reference plane. For example, a plane parallel to (i.e., facing) an image plane of the imaging device 300 is used as a reference plane.
- the coordinate conversion section 400 supplies the image data, which has been subjected to the coordinate conversion, to the optical character recognition section 119 .
- the control section 112 controls the entire electronic apparatus 100 .
- the control section 112 controls the laser light irradiation section 113 to start consecutive irradiation with the laser light. Further, the control section 112 causes the insertion and removal section 114 to remove the diffuser plate 115 .
- the control section 112 controls the laser light irradiation section 113 to start intermittent irradiation with the laser light.
- the control section 112 supplies a light emission control signal CLKp of a square wave or a sine wave to the laser light irradiation section 113 , and the laser light irradiation section 113 emits light in synchronization with that signal.
- pulsed light with the frequency of, for example, 20 megahertz (MHz) is applied.
- when the button is pressed all the way down, the control section 112 causes the insertion and removal section 114 to insert the diffuser plate 115. Furthermore, the control section 112 controls the imaging device 300 to capture image data. The control section 112 then controls the switch 117 to supply the image data to the distance measurement section 116 and causes the distance measurement section 116 to measure a distance over a fixed distance measurement period.
- after the distance measurement, the control section 112 controls the imaging device 300 to capture image data again. Further, the control section 112 controls the switch 117 to supply the image data to the coordinate conversion section 400.
- the optical character recognition section 119 performs OCR on the image data.
- the optical character recognition section 119 supplies a result of the OCR to the translation processing section 120 .
- the translation processing section 120 performs translation processing of replacing a language (Japanese etc.) of a text including characters recognized by the OCR with another predetermined language (English etc.).
- the translation processing section 120 supplies a result of the translation processing to the sound output section 121 .
- the sound output section 121 outputs the result of the translation of the translation processing section 120 with use of sound.
- although the sections such as the laser light irradiation section 113 and the imaging device 300 are provided within one apparatus (the electronic apparatus 100) here, those sections may instead be distributed across a plurality of apparatuses.
- the laser light irradiation section 113 , the insertion and removal section 114 , and the diffuser plate 115 may be provided within an external laser pointer unit that is externally provided to the electronic apparatus 100 .
- the optical character recognition section 119 and the translation processing section 120 may be provided to an information processing apparatus outside the electronic apparatus 100 . In this case, the electronic apparatus 100 only needs to send image data and character information to that information processing apparatus and receive a result of OCR or translation.
- the electronic apparatus 100 may perform processing other than the OCR on the image data.
- the electronic apparatus 100 may perform processing of recognizing a particular object (face etc.) or a two-dimensional bar code, instead of the OCR.
- although the electronic apparatus 100 outputs the result of the translation with use of sound here, the result of the translation may instead be displayed on a display section such as a liquid crystal monitor.
- FIG. 2 is a block diagram showing a configuration example of the three-dimensional shape estimation section 200 in the first embodiment.
- the three-dimensional shape estimation section 200 includes a measured coordinate retaining section 210, a measured coordinate detection section 220, a distance information retaining section 230, a least-squares method computation section 240, a parameter retaining section 250, an error computation section 260, and a parameter supply section 270.
- the distance information retaining section 230 retains the distance information indicating a distance of each of the measurement points, which is measured by the distance measurement section 116 .
- the measured coordinate detection section 220 detects three-dimensional coordinates in a predetermined reference coordinate system for each of the measurement points, as measured coordinates, on the basis of the distance information.
- a coordinate system including an X0 axis and a Y0 axis that are parallel to the image plane of the imaging device 300 and a Z0 axis perpendicular to the image plane is used.
- the measured coordinate detection section 220 acquires in advance an angle defined by the direction of the diffused laser light and a predetermined reference axis (such as the X0, Y0, and Z0 axes) and, using a trigonometric function based on the angle and the measured distance, computes the measured coordinates for each of the measurement points.
- the measured coordinate detection section 220 causes the measured coordinate retaining section 210 to retain the measured coordinates for each of the measurement points.
- the number of measurement points is represented by N (N is an integer of 2 or more), and the measured coordinates of the i-th (i is an integer from 0 to N−1) measurement point are represented by (x0i, y0i, z0i).
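The trigonometric conversion from a measured distance to measured coordinates can be sketched as follows. The azimuth/elevation parameterization of the irradiation direction is an assumption for illustration; the patent only states that pre-acquired angles and a trigonometric function are used.

```python
import math

def measured_coordinates(distance, azimuth, elevation):
    """Turn a distance measured along a known irradiation direction into
    (X0, Y0, Z0) coordinates in the reference frame, where X0 and Y0 are
    parallel to the image plane and Z0 is perpendicular to it.

    azimuth and elevation are hypothetical names for the pre-calibrated
    angles between the diffused ray and the reference axes.
    """
    x0 = distance * math.cos(elevation) * math.sin(azimuth)
    y0 = distance * math.sin(elevation)
    z0 = distance * math.cos(elevation) * math.cos(azimuth)
    return (x0, y0, z0)
```

For a ray straight along the optical axis (both angles zero), the full distance lands on the Z0 coordinate, as expected.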
- the least-squares method computation section 240 computes a function that most fits a measured coordinate group for each of candidate shapes by using the least-squares method.
- the plurality of candidate shapes include, for example, a planar shape inclined to the reference plane and a columnar shape.
- the function obtained by the computation is a function indicating a relationship between coordinates in the coordinate system of the candidate shape and coordinates in the reference coordinate system.
- a coordinate system including an Xp axis, a Yp axis, and a Zp axis that are orthogonal to one another is defined as the planar coordinate system on the inclined plane.
- a relationship between the coordinates in the planar coordinate system and the coordinates in the reference coordinate system is expressed by, for example, (x0i, y0i, z0i)^T = Rp (xpi, ypi, zpi)^T + Tp, where Rp = Rzp Ryp Rxp.
- Rxp represents a rotation matrix for rotation about the X0 axis by an angle trxp.
- Ryp represents a rotation matrix for rotation about the Y0 axis by an angle tryp.
- Rzp is a rotation matrix for rotation about the Z0 axis by an angle trzp.
- Tp represents a translational vector including an X0 component, a Y0 component, and a Z0 component.
- the values of those X0, Y0, and Z0 components are represented by A, B, and C.
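As a simplified stand-in for the rotation-matrix parameterization above, a least-squares fit of the plane z0 = a*x0 + b*y0 + c over the measured coordinates illustrates the idea: the normal equations are solved for the plane minimizing the sum of squared errors, and the fitted normal (a, b, -1) fixes the tilt from which a rotation like Rp could be derived. The function and variable names are illustrative, not from the patent.

```python
def fit_plane(points):
    """Least-squares fit of z = a*x + b*y + c to a list of (x, y, z)
    measured coordinates. Returns (a, b, c) and the sum of squared
    errors (the role played by Ep in the text)."""
    # Accumulate the 3x3 normal equations (A^T A) p = A^T z.
    sxx = sxy = syy = sx = sy = sxz = syz = sz = 0.0
    n = len(points)
    for x, y, z in points:
        sxx += x * x; sxy += x * y; syy += y * y
        sx += x; sy += y
        sxz += x * z; syz += y * z; sz += z
    m = [[sxx, sxy, sx, sxz],
         [sxy, syy, sy, syz],
         [sx,  sy,  n,  sz]]
    # Gauss-Jordan elimination with partial pivoting on the 3x4 system.
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(3):
            if r != col:
                f = m[r][col] / m[col][col]
                for c in range(col, 4):
                    m[r][c] -= f * m[col][c]
    a, b, c = (m[i][3] / m[i][i] for i in range(3))
    ep = sum((a * x + b * y + c - z) ** 2 for x, y, z in points)
    return (a, b, c), ep
```

With exactly coplanar inputs the residual ep is zero; real ToF measurements would leave a small positive ep, which is what gets compared against the columnar fit.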
- a coordinate system including an Xc axis, a Yc axis, and a Zc axis that are orthogonal to one another is defined as the columnar coordinate system on the column.
- a relationship between the coordinates in the columnar coordinate system and the coordinates in the reference coordinate system is expressed by, for example, (x0i, y0i, z0i)^T = Rc (xci, yci, zci)^T + Tc, where Rc = Rzc Ryc Rxc and the surface points satisfy xci² + zci² = r².
- (xci, yci, zci) represent the coordinates in the columnar coordinate system.
- Rxc represents a rotation matrix for rotation about the X0 axis by an angle trxc.
- Ryc represents a rotation matrix for rotation about the Y0 axis by an angle tryc.
- Rzc is a rotation matrix for rotation about the Z0 axis by an angle trzc.
- r is the radius of the column.
- Tc represents a translational vector including an X0 component, a Y0 component, and a Z0 component.
- the values of those X0, Y0, and Z0 components are represented by D, E, and F.
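For the columnar case, the eventual coordinate conversion amounts to "unrolling" the curved surface onto a flat plane. A sketch under simplifying assumptions: the cylinder axis is taken parallel to the Y0 axis and passing through (cx, cz) in the X0-Z0 plane, i.e. the rotation Rc is the identity; the general case would first rotate each point into the columnar frame with the inverse of the estimated Rc. All names here are illustrative.

```python
import math

def unroll_cylinder_point(x0, y0, z0, r, cx, cz):
    """Map a point on a cylinder surface (radius r, axis parallel to Y0
    through (cx, 0, cz)) onto flat (u, v) plane coordinates."""
    # Angle around the axis, zero at the surface point nearest the camera
    # (camera assumed at the origin looking along +Z0).
    theta = math.atan2(x0 - cx, -(z0 - cz))
    u = r * theta  # arc length becomes the horizontal plane coordinate
    v = y0         # height along the axis is preserved
    return (u, v)
```

Unrolling this way removes the horizontal compression that curvature causes in the captured image, which is what lets OCR read text wrapped around, say, a bottle or a pillar.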
- the parameters are obtained by, for example, a partial differentiation method of partially differentiating both sides of the above expression with respect to trxc, tryc, trzc, D, E, and F, and setting each derivative to zero.
- the least-squares method computation section 240 causes the parameter retaining section 250 to retain the obtained parameter group of Rp and Tp and parameter group of Rc, Tc, and r.
- the error computation section 260 computes, by using the parameters corresponding to a candidate shape for each of the candidate shapes, the error at the time when that shape is assumed.
- the error computation section 260 computes the sum of the squared errors Ep corresponding to the inclined planar shape and the sum of the squared errors Ec corresponding to the columnar shape by using, for example, Expressions 2 and 4, and supplies the sums Ep and Ec to the parameter supply section 270 .
- the parameter supply section 270 estimates any one of the plurality of candidate shapes, as the shape of an actual object, on the basis of the errors.
- the parameter supply section 270 compares the sum of the squared errors Ep and the sum of the squared errors Ec with each other and estimates a candidate shape corresponding to one having a smaller value as the shape of the object.
- the parameter supply section 270 then reads a parameter group corresponding to that shape from the parameter retaining section 250 and supplies the parameter group to the coordinate conversion section 400 , together with identification information of the estimated shape.
- the parameter supply section 270 is an example of an estimation processing section described in the Claims.
- the three-dimensional shape estimation section 200 assumes the two candidate shapes, i.e., a planar shape and a columnar shape, to estimate the shape, but the types of the candidate shapes are not limited thereto.
- the candidate shapes may be, for example, a sphere or a cube.
- the number of candidate shapes is not limited to two and may be three or more. In a case where the number of candidate shapes is three or more, the parameter supply section 270 estimates a candidate shape having the smallest errors as the shape of the object.
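The selection rule itself is just an argmin over the candidate errors. A sketch (the function name and dict keys are illustrative, not from the patent):

```python
def estimate_shape(squared_errors):
    """Choose the candidate shape whose sum of squared fitting errors is
    smallest, mirroring the comparison of Ep and Ec described above.
    The rule extends unchanged to three or more candidates."""
    return min(squared_errors, key=squared_errors.get)
```

For example, `estimate_shape({'plane': 0.8, 'column': 2.3})` would select the planar candidate.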
- FIG. 3 is a diagram showing an example of the distance information in the first embodiment.
- the distance information includes a distance from the imaging lens 118 to the measurement point for each of the measurement points. For example, in a case where a distance to a measurement point P1 is 100 meters and a distance to a measurement point P2 is 102 meters, the three-dimensional shape estimation section 200 retains "100" corresponding to P1 and "102" corresponding to P2 in the distance information retaining section 230.
- FIG. 4 is a diagram showing an example of the measured coordinates in the first embodiment.
- the measured coordinates on the X0, Y0, and Z0 axes are calculated for each of the measurement points on the basis of the distances to the measurement points and the angle of irradiation with laser light. It is assumed that 50, 50, and 51 are respectively calculated as the X0, Y0, and Z0 coordinates at the measurement point P1, and 50, 51, and 51 are respectively calculated as the X0, Y0, and Z0 coordinates at the measurement point P2.
- the three-dimensional shape estimation section 200 retains the measured coordinates (50, 50, 51) corresponding to the measurement point P1 and the measured coordinates (50, 51, 51) corresponding to the measurement point P2 in the measured coordinate retaining section 210.
- FIG. 5 is a diagram showing an example of the parameters in the first embodiment.
- the function of Expression 1 corresponding to the planar shape includes the parameters of the rotation matrix Rp and the translational vector Tp.
- the function of Expression 2 corresponding to the columnar shape includes the parameters of the rotation matrix Rc, the translational vector Tc, and the radius r.
- the three-dimensional shape estimation section 200 calculates those parameters by the least-squares method and retains those parameters in the parameter retaining section 250 .
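A simplified stand-in for this least-squares estimation is sketched below. The patent solves for a rotation matrix Rp and translational vector Tp (Expression 1); here an explicit plane model z = a·x + b·y + c is fitted instead, purely for illustration of the least-squares step.

```python
def fit_plane(points):
    """Least-squares fit of z = a*x + b*y + c to measured (x, y, z)
    points, via the normal equations. An illustrative simplification
    of the patent's parameter estimation, not Expression 1 itself."""
    sxx = sxy = sx = syy = sy = n = 0.0
    sxz = syz = sz = 0.0
    for x, y, z in points:
        sxx += x * x; sxy += x * y; sx += x
        syy += y * y; sy += y; n += 1
        sxz += x * z; syz += y * z; sz += z
    # Normal equations M @ (a, b, c) = v
    M = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]
    v = [sxz, syz, sz]
    # Solve the 3x3 system by Gaussian elimination with partial pivoting
    for i in range(3):
        piv = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[piv] = M[piv], M[i]
        v[i], v[piv] = v[piv], v[i]
        for r in range(i + 1, 3):
            f = M[r][i] / M[i][i]
            for c in range(i, 3):
                M[r][c] -= f * M[i][c]
            v[r] -= f * v[i]
    p = [0.0, 0.0, 0.0]
    for i in reversed(range(3)):
        p[i] = (v[i] - sum(M[i][c] * p[c] for c in range(i + 1, 3))) / M[i][i]
    return tuple(p)  # (a, b, c)
```

The sum of squared errors Ep of the first embodiment corresponds to the residual of this fit; the columnar case adds the radius r as a fourth parameter.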
- FIG. 6 is a block diagram showing a configuration example of the imaging device 300 in the first embodiment.
- the imaging device 300 includes a row scanning circuit 310 , a pixel array section 320 , a timing control section 340 , a plurality of AD (Analog to Digital) conversion sections 350 , a column scanning circuit 360 , and a signal processing section 370 .
- the pixel array section 320 includes a plurality of pixel circuits 330 disposed in a two-dimensional lattice manner.
- an aggregation of the pixel circuits 330 arrayed in a predetermined direction is referred to as a “row”, and an aggregation of the pixel circuits 330 arrayed in a direction perpendicular to the row is referred to as a “column”.
- the AD conversion sections 350 described above are provided for each of the columns.
- the timing control section 340 controls the row scanning circuit 310 , the AD conversion sections 350 , and the column scanning circuit 360 in synchronization with a vertical synchronization signal.
- the row scanning circuit 310 causes all the rows to be exposed simultaneously, and after the end of exposure, selects the rows in sequence so as to cause the rows to output pixel signals.
- the pixel circuits 330 output pixel signals each having a level corresponding to the amount of light received, under the control of the row scanning circuit 310 .
- the AD conversion sections 350 each AD-convert the pixel signals from the column corresponding thereto.
- the AD conversion sections 350 output the AD-converted pixel signals as pixel data to the signal processing section 370 under the control of the column scanning circuit 360 .
- the column scanning circuit 360 selects the AD conversion sections 350 in sequence and causes the AD conversion sections 350 to output the pixel data.
- the signal processing section 370 performs signal processing such as CDS (Correlated Double Sampling) processing on image data including the pixel data.
- the signal processing section 370 supplies the image data after having been subjected to the signal processing to the switch 117 .
- FIG. 7 is a block diagram showing a configuration example of the pixel circuit 330 in the first embodiment.
- the pixel circuit 330 includes a light-receiving element 331 , a transfer switch 332 , charge storage sections 333 and 334 , and selector switches 335 and 336 .
- the light-receiving element 331 performs photoelectric conversion on light and generates charge.
- a photodiode is used for the light-receiving element 331 .
- the transfer switch 332 connects the light-receiving element 331 to any one of the charge storage section 333 , the charge storage section 334 , and a reset power supply Vrst under the control of the row scanning circuit 310 .
- the transfer switch 332 is achieved by, for example, a plurality of MOS (Metal-Oxide-Semiconductor) transistors.
- the charge storage sections 333 and 334 store charge and generate a voltage corresponding to the amount of stored charge.
- a floating diffusion layer is used for those charge storage sections 333 and 334 .
- the selector switch 335 opens and closes a pathway between the charge storage section 333 and the AD conversion section 350 under the control of the row scanning circuit 310 .
- the selector switch 336 opens and closes a pathway between the charge storage section 334 and the AD conversion section 350 under the control of the row scanning circuit 310 .
- when an FD read signal RD_FD1 is supplied by the row scanning circuit 310, the selector switch 335 is changed into the closed state.
- when an FD read signal RD_FD2 is supplied by the row scanning circuit 310, the selector switch 336 is changed into the closed state.
- Each of those selector switches 335 and 336 is achieved by, for example, the MOS transistor.
- FIG. 8 is a block diagram showing a configuration example of the coordinate conversion section 400 in the first embodiment.
- the coordinate conversion section 400 includes a cutout processing section 420 , a frame memory 410 , and an address conversion section 430 .
- the cutout processing section 420 cuts out a region having a predetermined shape (e.g., rectangle) including a part surrounded by the circular laser light in the image data.
- the cutout processing section 420 causes the frame memory 410 to retain the region cut out as a cutout region.
- the frame memory 410 retains the cutout region.
- the address conversion section 430 converts, for each of the pixels within the cutout region, the coordinates thereof into coordinates in the reference plane.
- the coordinate conversion section 400 receives identification information of the shape estimated by the three-dimensional shape estimation section 200 , and the parameter group of the shape.
- in a case where the planar shape is estimated, the rotation matrix Rp and the translational vector Tp of Expression 1 are supplied as parameters to the coordinate conversion section 400.
- in a case where the columnar shape is estimated, the rotation matrix Rc, the translational vector Tc, and the radius r of Expression 2 are supplied as parameters to the coordinate conversion section 400.
- a positional relationship between coordinates (ui, vi) on the imaging device 300 and coordinates (xei, yei) on the inclined plane is expressed by the following expression using the parameters Rp and Tp.
- f represents a focal length.
- the address conversion section 430 outputs a pixel value of the coordinates (ui, vi) in the image (cutout region) retained in the frame memory 410 by using the above expression, as a pixel value of the coordinates (xei, yei) on the reference plane. As a result, the address conversion section 430 can generate an image facing the image plane.
- a positional relationship between coordinates (ui, vi) on the imaging device 300 and coordinates (xei, yei) on the column is expressed by the following expression using the parameters Rc, Tc, and r.
- the address conversion section 430 outputs a pixel value of the coordinates (ui, vi) in the image retained in the frame memory 410 by using the above expression, as a pixel value of the coordinates (xei, yei) on the reference plane. As a result, the address conversion section 430 can generate an image facing the image plane.
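For the planar case, this inverse mapping can be sketched as below: each reference-plane coordinate (xe, ye) is projected back into image coordinates (u, v), and the pixel value at (u, v) is sampled from the frame memory. The pinhole model and the exact composition of Rp and Tp are illustrative assumptions; the patent's Expression 1 may differ in form.

```python
def plane_to_image(xe, ye, Rp, Tp, f):
    """Project a point (xe, ye, 0) on the estimated inclined plane into
    image coordinates (u, v). Rp is a 3x3 rotation matrix given as a
    list of rows, Tp a translation vector, f a focal length; all are
    assumed forms for illustration of the address conversion step."""
    p = [xe, ye, 0.0]
    # Apply rotation and translation: q = Rp @ p + Tp
    q = [sum(Rp[r][c] * p[c] for c in range(3)) + Tp[r] for r in range(3)]
    # Pinhole projection with focal length f
    u = f * q[0] / q[2]
    v = f * q[1] / q[2]
    return u, v
```

An address conversion loop would iterate over every output pixel of the reference plane, compute (u, v) with this function, and copy the pixel value found there in the cutout region.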
- the coordinate conversion section 400 cuts out only the periphery of the part surrounded by the laser light and performs coordinate conversion thereon, but may perform coordinate conversion on the entire image without performing cutout.
- FIG. 9 is a diagram showing an example of the shape of the laser light in the first embodiment.
- Part a of the figure is a diagram showing an example of the shape of the laser light when the button is pressed halfway down.
- spot-like laser light 500 is applied with the diffuser plate 115 being removed.
- part b of FIG. 9 is a diagram showing an example of the shape of the laser light when the button is pressed all the way down.
- the diffuser plate 115 is inserted and circular laser light 501 is applied.
- a user adjusts a position irradiated with the laser light such that characters to be subjected to OCR (characters etc. on a price tag) fall within the circle of the laser light.
- in the ToF method, it is common to perform irradiation with light that is not visible light, such as infrared light.
- in contrast, the electronic apparatus 100 performs irradiation with visible light such as red light.
- even with visible light, the principle of the distance measurement by the ToF method is similar to that in the case of infrared light.
- FIG. 10 is a timing chart showing an example of exposure control of the pixel circuit within a Q1Q2 detection period in the first embodiment.
- the pixel circuit 330 alternately repeats detection of the amounts of light received Q1 and Q2 and detection of the amounts of light received Q3 and Q4.
- a detection period of the amounts of light received Q1 and Q2 is referred to as a "Q1Q2 detection period", and a detection period of the amounts of light received Q3 and Q4 is referred to as a "Q3Q4 detection period".
- the length of each of the Q1Q2 detection period and the Q3Q4 detection period is one cycle of a vertical synchronization signal VSYNC (e.g., 1/60 second).
- the amount of light received Q1 is an accumulation of the amounts of light received q1 from 0 degrees to 180 degrees over the Q1Q2 detection period, when a particular phase (e.g., rising) of a light emission control signal CLKp of the laser light is set to 0 degrees.
- the frequency of the light emission control signal CLKp is as high as 20 megahertz (MHz), and thus the amount of light received q1 per cycle (1/20 microsecond) is very small and difficult to detect.
- for this reason, the pixel circuit 330 accumulates each q1 over the Q1Q2 detection period (e.g., 1/60 second), which is far longer than the cycle of the light emission control signal CLKp (1/20 microsecond), and detects the total amount thereof as the amount of light received Q1.
- similarly, the amount of light received Q2 is an accumulation of the amounts of reflected light received q2 from 180 degrees to 360 degrees over the Q1Q2 detection period.
- the amount of light received Q3 is an accumulation of the amounts of reflected light received q3 from 90 degrees to 270 degrees over the Q3Q4 detection period.
- the amount of light received Q4 is an accumulation of the amounts of reflected light received q4 from 270 degrees to 90 degrees over the Q3Q4 detection period.
- d represents a distance, and its unit is, for example, meter (m).
- c represents the speed of light, and its unit is, for example, meter per second (m/s).
- tan⁻¹( ) represents an inverse function of a tangent function.
- the row scanning circuit 310 supplies a reset signal RST to all the rows over a predetermined pulse period from the timing T1.
- by this reset signal RST, the amounts of charge stored in the charge storage sections 333 and 334 in all the rows are initialized.
- further, the row scanning circuit 310 initializes the charge of the light-receiving elements 331 in all the rows with use of an FD selection signal SEL_FD.
- the row scanning circuit 310 then causes charge generated by the light-receiving element 331 with use of the FD selection signal SEL_FD to be transferred to the charge storage section 333 for all the rows from 0 degrees to 180 degrees within the cycle of the light emission control signal CLKp in the Q1Q2 detection period. With this control, the amount of light received q1 is stored in the charge storage sections 333.
- further, the row scanning circuit 310 causes charge generated by the light-receiving element 331 with use of the FD selection signal SEL_FD to be transferred to the charge storage section 334 for all the rows from 180 degrees to 360 degrees within the cycle of the light emission control signal CLKp in the Q1Q2 detection period. With this control, the amount of light received q2 is stored in the charge storage section 334.
- the row scanning circuit 310 supplies in sequence the FD read signals RD_FD1 and RD_FD2 to the first row. With this control, a pixel signal corresponding to the amounts of light received Q1 and Q2 of the first row is read. Next, the row scanning circuit 310 supplies in sequence the FD read signals RD_FD1 and RD_FD2 to the second row and reads a pixel signal. Hereinafter, similarly, the row scanning circuit 310 selects the rows in sequence and reads pixel signals.
- each of the pixel circuits 330 detects the amount of light received Q1 from 0 degrees to 180 degrees and the amount of light received Q2 from 180 degrees to 360 degrees.
- FIG. 11 is a timing chart showing an example of exposure control of the pixel circuits 330 within the Q3Q4 detection period in the first embodiment.
- the row scanning circuit 310 supplies the reset signal RST to all the rows over a predetermined pulse period from the timing T2 and initializes the amounts of charge stored in the charge storage sections 333 and 334 in all the rows. Further, the row scanning circuit 310 initializes the charge of the light-receiving elements 331 in all the rows with use of the FD selection signal SEL_FD.
- the row scanning circuit 310 then causes charge generated by the light-receiving element 331 with use of the FD selection signal SEL_FD to be transferred to the charge storage section 334 for all the rows from the initial 0 degrees to 90 degrees. With this control, the amount of light received q4 is stored in the charge storage section 334.
- the row scanning circuit 310 causes charge generated by the light-receiving element 331 with use of the FD selection signal SEL_FD to be transferred to the charge storage section 333 for all the rows from 90 degrees to 270 degrees within the cycle of the light emission control signal CLKp. With this control, the amount of light received q3 is stored in the charge storage section 333.
- the row scanning circuit 310 causes charge generated by the light-receiving element 331 with use of the FD selection signal SEL_FD to be transferred to the charge storage section 334 for all the rows from 270 degrees to 90 degrees within the cycle of the light emission control signal CLKp in the Q3Q4 detection period. With this control, the amount of light received q4 is stored in the charge storage section 334.
- the row scanning circuit 310 supplies in sequence the FD read signals RD_FD1 and RD_FD2 to the first row. With this control, a pixel signal corresponding to the amounts of light received Q3 and Q4 of the first row is read. Hereinafter, similarly, the row scanning circuit 310 selects the rows in sequence and reads pixel signals.
- each of the pixel circuits 330 detects the amount of light received Q3 from 90 degrees to 270 degrees and the amount of light received Q4 from 270 degrees to 90 degrees.
- FIG. 12 is a timing chart showing an example of exposure control of the pixel circuits 330 within an imaging period in the first embodiment.
- the row scanning circuit 310 supplies the reset signal RST to all the rows over a predetermined pulse period from the timing T3 and initializes the amount of charge stored in all the rows. Further, the row scanning circuit 310 initializes the charge of the light-receiving elements 331 in all the rows with use of the FD selection signal SEL_FD.
- the row scanning circuit 310 then causes charge generated by the light-receiving element 331 with use of the FD selection signal SEL_FD to be transferred to the charge storage section 333 for all the rows. Subsequently, at a timing T31 immediately before a timing T4, the row scanning circuit 310 supplies the FD read signal RD_FD1 to the first row and reads a pixel signal. Next, the row scanning circuit 310 supplies the FD read signal RD_FD1 to the second row and reads a pixel signal. Hereinafter, similarly, the row scanning circuit 310 selects the rows in sequence and reads pixel signals. As a result, image data is captured.
- FIG. 13 is a diagram showing an example of the relationship between the reference coordinate system and the planar coordinate system in the first embodiment.
- the reference coordinate system including the X0 axis, the Y0 axis, and the Z0 axis is rotated and shifted in parallel by Rp and Tp, and the planar coordinate system including the Xp axis, the Yp axis, and the Zp axis is obtained.
- the electronic apparatus 100 calculates the sum of the squared errors Ep, assuming that N measurement points such as measurement points 502 and 503 are positioned on an Xp-Yp plane 510.
- FIG. 14 is a diagram showing an example of the relationship between the reference coordinate system and the columnar coordinate system in the first embodiment.
- the reference coordinate system including the X0 axis, the Y0 axis, and the Z0 axis is rotated and shifted in parallel by Rc and Tc, and the columnar coordinate system including the Xc axis, the Yc axis, and the Zc axis is obtained.
- the electronic apparatus 100 calculates the error Ec, assuming that N measurement points such as measurement points 502 and 503 are positioned on the surface of a column 511.
- FIG. 15 is a diagram showing an example of a usage scene of the electronic apparatus 100 when an inclined plane is imaged in the first embodiment.
- Part a of the figure is a diagram showing an example of the shape of the laser light when the button is pressed halfway down.
- the spot-like laser light 500 is applied.
- the electronic apparatus 100 is, for example, a stick-like apparatus.
- a user grasps the electronic apparatus 100 with the hand and can change the direction of the laser light. The user moves a point irradiated with the laser light 500 and presses the button all the way down at a position of characters to be subjected to OCR.
- for example, when the user wants to read a price tag, the user moves the point irradiated with the laser light 500 to that price tag.
- the plane of the price tag corresponds to the plane 510 of FIG. 13 .
- Part b of FIG. 15 is a diagram showing an example of the shape of the laser light when the button is pressed all the way down. As exemplified in part b of the figure, when the button is pressed all the way down, the circular laser light 501 is applied to the price tag (plane 510 ).
- FIG. 16 is a diagram showing an example of a planar shape, images before and after conversion, and a result of translation in the first embodiment.
- Part a of the figure is a diagram showing an example of a candidate planar shape.
- the color gradation represents a depth.
- a plane inclined to the image plane of the imaging device 300 is assumed.
- Part b of FIG. 16 shows an example of a part irradiated with the laser light in the captured image data.
- although character strings of "Salmon" and "100 yen" are described in the price tag, the characters are distorted because the plane of the price tag does not face the image plane. Because of this, there is a risk that the characters cannot be accurately read by OCR as they are.
- the electronic apparatus 100 converts the coordinates on the inclined plane into the coordinates on the reference plane.
- Part c of FIG. 16 is an example of an image after the coordinate conversion. As exemplified in part c of the figure, the plane that does not face the image plane is converted into the reference plane facing thereto, and the distortion of the characters disappears.
- the electronic apparatus 100 reads the characters in the converted image by OCR and performs translation.
- Part d of FIG. 16 is a diagram showing an example of a result of the translation. “Salmon” and “100 yen” in Japanese are translated into “Salmon” and “One hundred yen” in English and are output by sound.
- FIG. 17 is a diagram showing an example of a usage scene of the electronic apparatus 100 when an object with a columnar shape is imaged in the first embodiment.
- Part a in the figure is a diagram showing an example of the shape of the laser light when the button is pressed halfway down.
- for example, when the user wants to read a label of a wine bottle, the user moves a point irradiated with the laser light 500 to that label.
- This columnar part of the wine bottle corresponds to the column 511 of FIG. 14 .
- Part b of FIG. 17 is a diagram showing an example of the shape of the laser light when the button is pressed all the way down. As exemplified in part b of the figure, when the button is pressed all the way down, the circular laser light 501 is applied to the label (column 511 ).
- FIG. 18 is a diagram showing an example of a columnar shape, images before and after conversion, and a result of translation in the first embodiment. Part a of the figure is a diagram showing an example of a candidate columnar shape.
- Part b of FIG. 18 shows an example of a part irradiated with the laser light in the captured image data.
- the electronic apparatus 100 converts the coordinates on the column into the coordinates on the reference plane.
- Part c of FIG. 18 is an example of an image after the coordinate conversion.
- the curved surface of the column is converted into the reference plane, and the distortion of the characters disappears.
- the electronic apparatus 100 reads the characters in the converted image by OCR and performs translation.
- Part d of FIG. 18 is a diagram showing an example of a result of the translation. “Antioxidant free nice wine” in Japanese is translated into “Antioxidant free nice wine” in English and output by sound.
- FIG. 19 is a flowchart showing an example of the operation of the electronic apparatus 100 in the first embodiment. This operation starts, for example, when the electronic apparatus 100 is powered on.
- the electronic apparatus 100 determines whether the button is pressed halfway down by the user (Step S901).
- when the button is pressed halfway down (Step S901: Yes), the electronic apparatus 100 removes the diffuser plate (Step S902) and performs irradiation with spot-like laser light (Step S903).
- the electronic apparatus 100 then determines whether the button is pressed all the way down by the user (Step S904).
- when the button is not pressed halfway down (Step S901: No) or when the button is not pressed all the way down (Step S904: No), the electronic apparatus 100 repeats Step S901 and the subsequent steps.
- when the button is pressed all the way down (Step S904: Yes), the electronic apparatus 100 inserts the diffuser plate (Step S905) and performs irradiation with circular laser light and reception of its reflected light (Step S906). The electronic apparatus 100 performs distance measurement for each of the measurement points by the ToF method (Step S907) and estimates a three-dimensional shape of the object on the basis of the measured distances (Step S908).
- the electronic apparatus 100 then images the object (Step S909) and performs coordinate conversion on the image data (Step S910).
- the electronic apparatus 100 performs optical character recognition on the image after having been subjected to the coordinate conversion (Step S911) and performs translation and sound output (Step S912). After Step S912, the electronic apparatus 100 terminates the operation for OCR.
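The control flow of FIG. 19 (Steps S901 to S912) can be sketched as pseudocode. All method names on `device` are illustrative stand-ins for the sections of the electronic apparatus 100, not an API from the patent.

```python
# Hypothetical control-flow sketch of the FIG. 19 flowchart.
def run_ocr_cycle(device):
    # Wait for a half-press followed by a full press (Steps S901, S904);
    # otherwise repeat from Step S901.
    while True:
        if device.button_half_pressed():             # Step S901
            device.remove_diffuser()                 # Step S902
            device.emit_spot_laser()                 # Step S903
            if device.button_fully_pressed():        # Step S904
                break
    device.insert_diffuser()                         # Step S905
    device.emit_circular_laser()                     # Step S906 (with light reception)
    distances = device.measure_distances_tof()       # Step S907
    shape = device.estimate_shape(distances)         # Step S908
    image = device.capture_image()                   # Step S909
    flat = device.convert_coordinates(image, shape)  # Step S910
    text = device.ocr(flat)                          # Step S911
    device.translate_and_speak(text)                 # Step S912
    return text
```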
- the electronic apparatus 100 estimates the shape of the object on the basis of the distances to the measurement points and performs coordinate conversion on the basis of that shape.
- the stick-like electronic apparatus 100 includes the operation section 111 , the imaging device 300 , and the like.
- the operation section 111 and the like may be provided to a camera unit mounted to a wearable terminal.
- An imaging system in this modified example of the first embodiment is different from the first embodiment in that the operation section 111 and the like are provided to a camera unit mounted to a wearable terminal.
- FIG. 20 is a block diagram showing a configuration example of an imaging system.
- the imaging system includes a wearable terminal 150 and a camera unit 101 .
- the wearable terminal 150 includes an operation section 151 and a terminal control section 152 .
- a configuration of the camera unit 101 is similar to that of the electronic apparatus 100 of the first embodiment except that the camera unit 101 includes a camera unit control section 122 and a switch 123 instead of the control section 112 and the switch 117 .
- the camera unit 101 is an example of an electronic apparatus described in the Claims.
- the operation section 151 generates an operation signal according to a user operation on a switch or a button. For example, an operation to capture image data (e.g., to press a shutter button down) is performed.
- the operation section 151 supplies the generated operation signal to the terminal control section 152 .
- the terminal control section 152 controls the entire wearable terminal 150 .
- the terminal control section 152 supplies the operation signal to the camera unit control section 122 and receives image data and the like from the camera unit control section 122 .
- the camera unit control section 122 controls the entire camera unit 101 . In a case where the camera unit 101 is not mounted to the wearable terminal 150 , the camera unit control section 122 performs control similar to that in the first embodiment. Meanwhile, in a case where the camera unit 101 is mounted to the wearable terminal 150 , the camera unit control section 122 determines whether the operation for capturing image data (e.g., for pressing a shutter button down) is performed. When the shutter button is pressed down, the camera unit control section 122 controls the switch 123 to output image data to the wearable terminal 150 .
- the switch 123 switches an output destination of the image data under the control of the camera unit control section 122 .
- FIG. 21 is an example of an outer appearance view of the imaging system in the modified example of the first embodiment.
- the wearable terminal 150 is an eyeglasses-type terminal.
- the camera unit 101 can be attached to the side surface of the wearable terminal 150 via a coupling tool 155 .
- the lens portions of this terminal are omitted in the figure.
- as the coupling tool 155, for example, one described in FIG. 1 or 2 of Japanese Unexamined Patent Application Publication No. 2015-515638 can be used.
- the camera unit 101 operates also when detached from the wearable terminal 150 .
- the function of the single camera unit 101 is similar to that of the electronic apparatus 100 of the first embodiment.
- the wearable terminal 150 is provided with a blocking member 156 .
- the blocking member 156 blocks laser light from the camera unit 101 when the camera unit 101 is attached. It should be noted that incident light to the imaging device 300 is not blocked.
- the camera unit 101 does not perform distance measurement but captures image data or the like when mounted to the wearable terminal 150 .
- the wearable terminal 150 analyzes that image data and displays predetermined information on a head-up display of the terminal. For example, current position information, an image to be synthesized with a recognized object, and the like are displayed.
- the blocking member 156 blocks the laser light when the camera unit 101 is mounted to the wearable terminal 150 .
- by blocking the laser light, it is possible to prevent the laser light from appearing in the captured image data.
- the ToF method needs components to perform irradiation with laser light, and this makes it difficult to reduce costs and size accordingly.
- An electronic apparatus 100 of this second embodiment is different from that of the first embodiment in that reduction in size and costs is achieved.
- FIG. 22 is a block diagram showing a configuration example of the electronic apparatus 100 in the second embodiment.
- a digital camera such as a digital single-lens reflex camera is assumed.
- the electronic apparatus 100 of the second embodiment is different from that of the first embodiment in that a display section 123 is provided instead of the laser light irradiation section 113.
- in the imaging device 300, phase difference detection pixels that detect a phase difference of a pair of pupil-split images, and normal pixels, are disposed.
- a control section 112 controls the imaging device 300 to capture an image with a relatively low resolution as a live-view image in synchronization with a vertical synchronization signal.
- the control section 112 then controls a switch 117 to supply the live-view image to the display section 123 .
- when the button is pressed all the way down, the control section 112 causes the imaging device 300 to capture image data with a higher resolution than that of the live-view image.
- the control section 112 then controls the switch 117 to supply the image data to a distance measurement section 116 .
- the display section 123 displays the image data.
- the display section 123 is achieved by, for example, a liquid crystal monitor.
- the distance measurement section 116 measures a distance on the basis of the pixel signals of the phase difference pixels and supplies the image data and distance information to a three-dimensional shape estimation section 200 .
- in the image data, pixel values at the positions of the phase difference pixels are interpolated from surrounding pixels. Processing of the coordinate conversion section 400 and the others are similar to those of the first embodiment.
- the electronic apparatus 100 uses the distance measurement information in order to estimate a three-dimensional shape, but the distance measurement information is also used for AF (Auto Focus).
- in FIG. 22, an AF control section that controls a position of a focus lens on the basis of the distance measurement information is omitted.
- FIG. 23 is a block diagram showing a configuration example of the imaging device 300 in the second embodiment.
- the imaging device 300 of the second embodiment is different from that of the first embodiment in that normal pixel circuits 380 and phase difference detection pixel circuits 390 are provided instead of the pixel circuits 330 .
- the normal pixel circuits 380 are pixels that perform photoelectric conversion on visible light such as R (Red), G (Green), and B (Blue) and generate pixel signals.
- the phase difference detection pixel circuits 390 are pixels for detecting a phase difference of a pair of pupil-split images. A method in which the pixels for phase difference detection are disposed on the image plane in this manner, and a distance is measured on the basis of the signals of those pixels, is referred to as an image-plane phase difference method.
- the structure of the imaging device 300 in such an image-plane phase difference method is described in, for example, Japanese Patent Application Laid-open No. 2012-124791.
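The relationship between the detected image shift and the distance is analogous to stereo triangulation. The sketch below uses that analogy; the parameter names, units, and constants are illustrative assumptions, not values from the patent or from the cited publication.

```python
def distance_from_image_shift(shift_px, focal_length_mm, baseline_mm, pixel_pitch_mm):
    """Triangulation-style sketch: in the image-plane phase difference
    method, the shift between the pupil-split image pair plays the role
    of a stereo disparity. All parameters are illustrative assumptions."""
    shift_mm = shift_px * pixel_pitch_mm   # convert the shift to millimeters
    return focal_length_mm * baseline_mm / shift_mm  # distance in millimeters
```

A larger shift corresponds to a nearer object; in practice the mapping is calibrated per lens position rather than computed from nominal constants.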
- FIG. 24 is a flowchart showing an example of the operation of the electronic apparatus 100 in the second embodiment.
- the electronic apparatus 100 determines whether the button is pressed halfway down by the user (Step S901). When the button is pressed halfway down (Step S901: Yes), the electronic apparatus 100 captures and displays a live-view image (Step S921). The user adjusts the orientation of the electronic apparatus 100 while viewing the live-view image such that characters to be subjected to OCR fall within the monitor (display section 123). After the adjustment, the user presses the button all the way down.
- the electronic apparatus 100 determines whether the button is pressed all the way down by the user (Step S904). When the button is pressed all the way down, the electronic apparatus 100 captures image data and also obtains a distance for each of the measurement points on the basis of the phase difference detected in the phase difference detection pixel circuits 390 (Step S922). Further, the electronic apparatus 100 estimates a three-dimensional shape (Step S908) and interpolates the phase difference pixels in the image data (Step S923). The electronic apparatus 100 then executes Step S910 and the subsequent steps.
- in the second embodiment of the present technology, since a distance is measured on the basis of the phase difference detected by the phase difference detection pixels, it is unnecessary to provide the laser light irradiation section 113 or the diffuser plate 115, and it is possible to reduce the number of components. This facilitates reduction in size and costs.
- processing steps described in the above embodiments may be understood as a method including a series of those steps.
- the processing steps described in the above embodiments may be understood as a program for causing a computer to execute the series of those steps or as a recording medium storing that program.
- as the recording medium, for example, a CD (Compact Disc), an MD (MiniDisc), a DVD (Digital Versatile Disc), a memory card, a Blu-ray® Disc, and the like can be used.
- An electronic apparatus including:
- an imaging section that images an object and captures image data
- a distance measurement section that measures distances from the imaging section to a plurality of measurement points on a surface of the object
- a shape estimation section that estimates a shape of the object from the measured distances
- a coordinate conversion section that performs coordinate conversion on the image data, the coordinate conversion including converting three-dimensional coordinates on the surface of the object into plane coordinates on a predetermined reference plane on the basis of the estimated shape.
- a character recognition section that recognizes a character on the surface of the object in the image data that has been subjected to the coordinate conversion.
- the distance measurement section measures the distances on the basis of a phase difference between the irradiation light and reflected light of the irradiation light.
- the imaging section performs processing of capturing the image data and processing of receiving the irradiation light.
- the irradiation section performs irradiation with pulsed light as the irradiation light, the pulsed light being in synchronization with a predetermined cycle signal.
- the irradiation section selects spotlight or diffused light according to a predetermined operation and performs irradiation with the light as the irradiation light, and
- the distance measurement section measures the distances when irradiation is performed with the diffused light.
- the irradiation section starts irradiation with the irradiation light in a case where a predetermined button is pressed halfway down, and
- the imaging section captures the image data in a case where the predetermined button is pressed all the way down.
- the electronic apparatus is a camera unit that is attached to a wearable terminal.
- the shape estimation section estimates any one of a plurality of candidate shapes as the shape of the object on the basis of the distances.
- the shape estimation section includes
- a coordinate detection section that detects coordinates of the plurality of measurement points as measured coordinates on the basis of the distances
- a function acquisition section that acquires, for each of the plurality of candidate shapes, a function representing a relationship between coordinates of a candidate shape and coordinates of a predetermined reference coordinate system by using the measured coordinates
- an error computation section that computes an error at a time when the shape of the object is assumed for each of the plurality of candidate shapes on the basis of the acquired function and the measured coordinates
- an estimation processing section that estimates a shape having the smallest error in the plurality of candidate shapes as the shape of the object.
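The candidate-shape estimation described above (detect measured coordinates, acquire a least-squares function per candidate shape, compute each candidate's error, and select the candidate with the smallest error) can be sketched as follows. Purely for illustration, a plane and a sphere are assumed as the candidate shapes; the disclosed apparatus may use different candidates and a different error metric.

```python
import numpy as np

def fit_plane(pts):
    """Least-squares fit of z = a*x + b*y + c; returns mean squared error."""
    A = np.c_[pts[:, 0], pts[:, 1], np.ones(len(pts))]
    coef, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    resid = pts[:, 2] - A @ coef
    return np.mean(resid ** 2)

def fit_sphere(pts):
    """Linearized sphere fit: x^2+y^2+z^2 = 2*x0*x + 2*y0*y + 2*z0*z + k,
    with k = r^2 - |center|^2; returns mean squared radial error."""
    A = np.c_[2 * pts, np.ones(len(pts))]
    b = (pts ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center, k = sol[:3], sol[3]
    r = np.sqrt(k + center @ center)
    resid = np.linalg.norm(pts - center, axis=1) - r
    return np.mean(resid ** 2)

def estimate_shape(pts):
    """Pick the candidate shape with the smallest fitting error."""
    errors = {"plane": fit_plane(pts), "sphere": fit_sphere(pts)}
    return min(errors, key=errors.get)
```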
- the image data includes a plurality of pixel data
- the imaging section includes
- phase difference detection pixels that detect a phase difference between two pupil-split images
- the distance measurement section measures the distances on the basis of the phase difference detected by the phase difference detection pixels.
- a method of controlling an electronic apparatus including:
- a coordinate conversion step of performing coordinate conversion on the image data including converting three-dimensional coordinates on the surface of the object into plane coordinates on a predetermined reference plane on the basis of the estimated shape.
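The coordinate conversion step (three-dimensional coordinates on the object surface into plane coordinates on a reference plane) can be illustrated, for the simplest case of a planar surface, by projecting each point onto an orthonormal basis spanning the estimated plane. The helper below is hypothetical; in the disclosure, the analogous per-pixel mapping is performed by the address conversion section 430.

```python
import numpy as np

def to_plane_coords(points, origin, normal):
    """Map 3-D points lying on a plane (given by origin and normal)
    to 2-D coordinates in that plane. Illustrative helper only."""
    n = normal / np.linalg.norm(normal)
    # Pick a helper axis not parallel to n, then build basis (u, v)
    a = np.array([1.0, 0.0, 0.0]) if abs(n[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(n, a)
    u /= np.linalg.norm(u)
    v = np.cross(n, u)
    d = points - origin
    # Plane coordinates are the projections onto u and v
    return np.c_[d @ u, d @ v]
```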
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Remote Sensing (AREA)
- Radar, Positioning & Navigation (AREA)
- Computer Networks & Wireless Communication (AREA)
- Electromagnetism (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Signal Processing (AREA)
- Artificial Intelligence (AREA)
- Optics & Photonics (AREA)
- Length Measuring Devices By Optical Means (AREA)
- Character Input (AREA)
- Image Processing (AREA)
- Character Discrimination (AREA)
- Image Analysis (AREA)
Abstract
Description
Rp = Rxp · Ryp · Rzp
Rc = Rxc · Ryc · Rzc
d = (c / 4πf) · tan⁻¹{(Q3 − Q4) / (Q1 − Q2)}
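The distance equation above can be evaluated directly from the four charge samples Q1 through Q4 of a continuous-wave ToF pixel. A minimal sketch (the function name and the use of `atan2` for quadrant handling are assumptions, not part of the disclosure):

```python
import math

C = 299_792_458.0  # speed of light in m/s

def tof_distance(q1, q2, q3, q4, f_mod):
    """Distance from four charge samples of a continuous-wave ToF pixel,
    following d = (c / 4*pi*f) * arctan((Q3 - Q4) / (Q1 - Q2)).
    atan2 recovers the phase with the correct quadrant."""
    phase = math.atan2(q3 - q4, q1 - q2)
    return C * phase / (4 * math.pi * f_mod)
```

For example, equal differences Q3 − Q4 = Q1 − Q2 give a phase of π/4, so at a 20 MHz modulation frequency the distance is c / (16 · 20 MHz), roughly 0.94 m.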
- <2. Second Embodiment>
- 100 electronic apparatus
- 101 camera unit
- 111, 151 operation section
- 112 control section
- 113 laser light irradiation section
- 114 insertion and removal section
- 115 diffuser plate
- 116 distance measurement section
- 117, 123 switch
- 118 imaging lens
- 119 optical character recognition section
- 120 translation processing section
- 121 sound output section
- 122 camera unit control section
- 123 display section
- 150 wearable terminal
- 152 terminal control section
- 155 coupling tool
- 156 blocking member
- 200 three-dimensional shape estimation section
- 210 measured coordinate retaining section
- 220 measured coordinate detection section
- 230 distance information retaining section
- 240 least-squares method computation section
- 250 parameter retaining section
- 260 error computation section
- 270 parameter supply section
- 300 imaging device
- 310 row scanning circuit
- 320 pixel array section
- 330 pixel circuit
- 331 light-receiving element
- 332 transfer switch
- 333, 334 charge storage section
- 335, 336 selector switch
- 340 timing control section
- 350 AD conversion section
- 360 column scanning circuit
- 370 signal processing section
- 380 normal pixel circuit
- 390 phase difference detection pixel circuit
- 400 coordinate conversion section
- 410 frame memory
- 420 cutout processing section
- 430 address conversion section
Claims (12)
Applications Claiming Priority (3)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2016102584 | 2016-05-23 | ||
| JP2016-102584 | 2016-05-23 | ||
| PCT/JP2017/007827 WO2017203777A1 (en) | 2016-05-23 | 2017-02-28 | Electronic device, control method for electronic device, and program |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| US20180189591A1 (en) | 2018-07-05 |
| US10565462B2 (en) | 2020-02-18 |
Family
ID=60412243
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US15/739,248 Active US10565462B2 (en) | 2016-05-23 | 2017-02-28 | Electronic apparatus, method of controlling electronic apparatus, and program |
Country Status (5)
| Country | Link |
|---|---|
| US (1) | US10565462B2 (en) |
| EP (1) | EP3467765B1 (en) |
| JP (1) | JP6904261B2 (en) |
| CN (1) | CN107710275B (en) |
| WO (1) | WO2017203777A1 (en) |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US11310411B2 (en) | 2016-08-30 | 2022-04-19 | Sony Semiconductor Solutions Corporation | Distance measuring device and method of controlling distance measuring device |
Families Citing this family (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2017150893A (en) | 2016-02-23 | 2017-08-31 | ソニー株式会社 | Ranging module, ranging system, and control method of ranging module |
| JP2020076619A (en) * | 2018-11-07 | 2020-05-21 | ソニーセミコンダクタソリューションズ株式会社 | Floodlight control system, floodlight control method |
| JP7463671B2 (en) * | 2019-08-01 | 2024-04-09 | Toppanホールディングス株式会社 | Distance image capturing device and distance image capturing method |
| CN111882596B (en) * | 2020-03-27 | 2024-03-22 | 东莞埃科思科技有限公司 | Three-dimensional imaging method and device for structured light module, electronic equipment and storage medium |
Citations (13)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPH0338789A (en) | 1989-07-06 | 1991-02-19 | Fuji Electric Co Ltd | Pattern matching system |
| US5960379A (en) * | 1996-11-27 | 1999-09-28 | Fuji Xerox Co., Ltd. | Method of and apparatus for measuring shape |
| US5986745A (en) * | 1994-11-29 | 1999-11-16 | Hermary; Alexander Thomas | Co-planar electromagnetic profile scanner |
| JP2000293627A (en) | 1999-04-02 | 2000-10-20 | Sanyo Electric Co Ltd | Device and method for inputting image and storage medium |
| JP2000307947A (en) | 1999-04-16 | 2000-11-02 | Ricoh Co Ltd | Image processing apparatus and method |
| JP2003323693A (en) | 2002-04-30 | 2003-11-14 | Matsushita Electric Ind Co Ltd | Vehicle navigation system that automatically translates roadside signs and objects |
| US6937235B2 (en) * | 2001-08-09 | 2005-08-30 | Minolta Co., Ltd. | Three-dimensional object surface shape modeling apparatus, method and program |
| US20080024795A1 (en) * | 2006-07-25 | 2008-01-31 | Konica Minolta Sensing, Inc. | Three-dimensional shape measuring system, and three-dimensional shape measuring method |
| US20140375762A1 (en) | 2012-02-17 | 2014-12-25 | Sony Corporation | Information processing apparatus and method, image processing apparatus and method, and program |
| US20150253428A1 (en) * | 2013-03-15 | 2015-09-10 | Leap Motion, Inc. | Determining positional information for an object in space |
| US20160267357A1 (en) * | 2015-03-12 | 2016-09-15 | Care Zone Inc. | Importing Structured Prescription Records from a Prescription Label on a Medication Package |
| US20180292206A1 (en) | 2016-08-30 | 2018-10-11 | Sony Semiconductor Solutions Corporation | Distance measuring device and method of controlling distance measuring device |
| US20180348369A1 (en) | 2016-02-23 | 2018-12-06 | Sony Corporation | Ranging module, ranging system, and method of controlling ranging module |
Family Cites Families (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2004040395A (en) * | 2002-07-02 | 2004-02-05 | Fujitsu Ltd | Image distortion correction apparatus, method and program |
| JP4052929B2 (en) * | 2002-07-17 | 2008-02-27 | 株式会社リコー | 3D shape display device, 3D shape display method, program, and recording medium |
| JP2006267031A (en) * | 2005-03-25 | 2006-10-05 | Brother Ind Ltd | 3D input device and 3D input method |
| US7589844B2 (en) * | 2005-07-15 | 2009-09-15 | Asahi Glass Company, Limited | Shape inspection method and apparatus |
| CN101566465B (en) * | 2009-05-18 | 2011-04-06 | 西安交通大学 | Method for measuring object deformation in real time |
| JP6388108B2 (en) * | 2014-03-28 | 2018-09-12 | 日本電気株式会社 | POS terminal device, POS system, information processing system, image recognition method, and image recognition program |
2017
- 2017-02-28 WO PCT/JP2017/007827 patent/WO2017203777A1/en not_active Ceased
- 2017-02-28 US US15/739,248 patent/US10565462B2/en active Active
- 2017-02-28 JP JP2017560634A patent/JP6904261B2/en active Active
- 2017-02-28 CN CN201780002150.8A patent/CN107710275B/en active Active
- 2017-02-28 EP EP17802379.2A patent/EP3467765B1/en active Active
Patent Citations (13)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPH0338789A (en) | 1989-07-06 | 1991-02-19 | Fuji Electric Co Ltd | Pattern matching system |
| US5986745A (en) * | 1994-11-29 | 1999-11-16 | Hermary; Alexander Thomas | Co-planar electromagnetic profile scanner |
| US5960379A (en) * | 1996-11-27 | 1999-09-28 | Fuji Xerox Co., Ltd. | Method of and apparatus for measuring shape |
| JP2000293627A (en) | 1999-04-02 | 2000-10-20 | Sanyo Electric Co Ltd | Device and method for inputting image and storage medium |
| JP2000307947A (en) | 1999-04-16 | 2000-11-02 | Ricoh Co Ltd | Image processing apparatus and method |
| US6937235B2 (en) * | 2001-08-09 | 2005-08-30 | Minolta Co., Ltd. | Three-dimensional object surface shape modeling apparatus, method and program |
| JP2003323693A (en) | 2002-04-30 | 2003-11-14 | Matsushita Electric Ind Co Ltd | Vehicle navigation system that automatically translates roadside signs and objects |
| US20080024795A1 (en) * | 2006-07-25 | 2008-01-31 | Konica Minolta Sensing, Inc. | Three-dimensional shape measuring system, and three-dimensional shape measuring method |
| US20140375762A1 (en) | 2012-02-17 | 2014-12-25 | Sony Corporation | Information processing apparatus and method, image processing apparatus and method, and program |
| US20150253428A1 (en) * | 2013-03-15 | 2015-09-10 | Leap Motion, Inc. | Determining positional information for an object in space |
| US20160267357A1 (en) * | 2015-03-12 | 2016-09-15 | Care Zone Inc. | Importing Structured Prescription Records from a Prescription Label on a Medication Package |
| US20180348369A1 (en) | 2016-02-23 | 2018-12-06 | Sony Corporation | Ranging module, ranging system, and method of controlling ranging module |
| US20180292206A1 (en) | 2016-08-30 | 2018-10-11 | Sony Semiconductor Solutions Corporation | Distance measuring device and method of controlling distance measuring device |
Non-Patent Citations (3)
| Title |
|---|
| International Preliminary Report on Patentability and English translation thereof dated Dec. 6, 2018, in connection with International Application No. PCT/JP2017/007827. |
| International Search Report and English translation thereof dated May 23, 2017, in connection with International Application No. PCT/JP2017/007827. |
| Written Opinion and English translation thereof dated May 23, 2017, in connection with International Application No. PCT/JP2017/007827. |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US11310411B2 (en) | 2016-08-30 | 2022-04-19 | Sony Semiconductor Solutions Corporation | Distance measuring device and method of controlling distance measuring device |
Also Published As
| Publication number | Publication date |
|---|---|
| JPWO2017203777A1 (en) | 2019-03-22 |
| US20180189591A1 (en) | 2018-07-05 |
| JP6904261B2 (en) | 2021-07-14 |
| EP3467765A4 (en) | 2020-02-26 |
| EP3467765B1 (en) | 2024-10-23 |
| CN107710275A (en) | 2018-02-16 |
| CN107710275B (en) | 2025-01-17 |
| EP3467765A1 (en) | 2019-04-10 |
| WO2017203777A1 (en) | 2017-11-30 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US10565462B2 (en) | Electronic apparatus, method of controlling electronic apparatus, and program | |
| US11172186B2 (en) | Time-Of-Flight camera system | |
| US10120066B2 (en) | Apparatus for making a distance determination | |
| TWI706152B (en) | Optoelectronic modules for distance measurements and/or multi-dimensional imaging, and methods of obtaining distance and/or three-dimensional data | |
| US20190230306A1 (en) | Digital pixel array with multi-stage readouts | |
| US8159598B2 (en) | Distance estimation apparatus, distance estimation method, storage medium storing program, integrated circuit, and camera | |
| US10698308B2 (en) | Ranging method, automatic focusing method and device | |
| US11610339B2 (en) | Imaging processing apparatus and method extracting a second RGB ToF feature points having a correlation between the first RGB and TOF feature points | |
| US9197808B2 (en) | Image capturing apparatus, method of controlling the same, and storage medium | |
| CN113325439B (en) | Depth camera and depth calculation method | |
| US11209262B2 (en) | Electronic apparatus, control method thereof and computer readable storage medium | |
| JP2012015642A (en) | Imaging device | |
| US9158183B2 (en) | Stereoscopic image generating device and stereoscopic image generating method | |
| US11863735B2 (en) | Camera module | |
| JP2012181757A (en) | Optical information reader | |
| JP6566800B2 (en) | Imaging apparatus and imaging method | |
| US20240395003A1 (en) | Information processing apparatus, image pickup apparatus, information processing method, and storage medium | |
| JP2024177808A (en) | Control device, imaging device, control method, and program | |
| JP2001145124A (en) | 3D image detection device |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | FEPP | Fee payment procedure | ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
| | AS | Assignment | Owner name: SONY CORPORATION, JAPAN. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:OHKI, MITSUHARU;FURUE, NOBUKI;OHKI, YOSHIHITO;AND OTHERS;SIGNING DATES FROM 20171127 TO 20171204;REEL/FRAME:045125/0384 |
| | STPP | Information on status: patent application and granting procedure in general | DOCKETED NEW CASE - READY FOR EXAMINATION |
| | STPP | Information on status: patent application and granting procedure in general | NON FINAL ACTION MAILED |
| | STPP | Information on status: patent application and granting procedure in general | RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
| | STPP | Information on status: patent application and granting procedure in general | NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
| | STPP | Information on status: patent application and granting procedure in general | NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
| | STPP | Information on status: patent application and granting procedure in general | PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
| | STCF | Information on status: patent grant | PATENTED CASE |
| | MAFP | Maintenance fee payment | PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY. Year of fee payment: 4 |