WO2007049774A1 - Code reader - Google Patents

Code reader

Info

Publication number
WO2007049774A1
WO2007049774A1 PCT/JP2006/321581 JP2006321581W
Authority
WO
WIPO (PCT)
Prior art keywords
code
image
area
display
size
Prior art date
Application number
PCT/JP2006/321581
Other languages
French (fr)
Inventor
Tetsuya Kuromatsu
Tomonori Irie
Original Assignee
Casio Computer Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Casio Computer Co., Ltd. filed Critical Casio Computer Co., Ltd.
Priority to EP06822542A priority Critical patent/EP1941421B1/en
Priority to DE602006021598T priority patent/DE602006021598D1/en
Publication of WO2007049774A1 publication Critical patent/WO2007049774A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06K GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K 7/00 Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K 7/10 Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K 7/10544 Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation by scanning of the records by radiation in the optical part of the electromagnetic spectrum
    • G06K 7/10712 Fixed beam scanning
    • G06K 7/10722 Photodetector array or CCD scanning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06K GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K 2207/00 Other aspects
    • G06K 2207/1011 Aiming

Landscapes

  • Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Toxicology (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)

Abstract

A code reader device (1), prior to extracting a code from an image captured by a camera (11), receives image size data and, according to the received image size, displays the captured image on the display device (14) so that a code extracting target area is discriminated from a remaining area. The layout of the captured image on a display screen becomes flexible, so that the usability for reading the code can be improved.

Description

D E S C R I P T I O N
CODE READER
Technical Field
The present invention relates to a code reader device for reading a code such as a bar code or a two-dimensional code, and a recording medium.
Background Art
Conventionally, a code reader device for reading a code, such as a bar code or a two-dimensional code, captures the code by an image sensor such as a charge coupled device (CCD) or a complementary metal-oxide semiconductor (CMOS), displays the captured image with a frame on a display screen such as a liquid crystal display (LCD), detects the code within the frame, and decodes the detected code. In order for the code reader device to properly read the code, a user adjusts a shooting direction, with the help of the frame displayed on the LCD, such that the code is located within the frame.
For example, in Jpn. Pat. Appln. KOKAI Publication No. 2005-4642, as a technique similar to the code reader device described above, there is disclosed a mobile phone having a camera function, comprising a display device which displays a captured image, a frame displaying device which displays a frame on the captured image, and a bar code extracting device which extracts a bar code image from the inside of the frame on the captured image.
However, with the camera-equipped mobile phone, it is difficult to arrange a layout flexibly on a display screen such that the captured code is displayed with an enlarged size, and an area indicating the decoded information or operation information is displayed with a reduced size. In other words, there is a problem that the usability of this kind of camera is not satisfactory.
Disclosure of Invention
An object of the present invention is to improve the usability of code reading. According to one aspect of the present invention, there is provided a code reader device which extracts a code from an image and decodes the code, the image including a code extracting area and a remaining area, the code reader device comprising: an image size specifying unit which specifies a display size of the image, prior to extracting the code from the image; and a display unit which displays the image with the display size specified by the image size specifying unit such that the code extracting area is distinguished from the remaining area.
Brief Description of Drawings
The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate embodiments of the present invention and, together with the general description given above and the detailed description of the embodiments given below, serve to explain the principles of the present invention in which:
FIG. 1 is a front perspective view of a code reader device 1;
FIG. 2 is a rear perspective view of the code reader device 1;
FIG. 3 is a schematic block diagram showing an electric configuration of the code reader device 1; FIG. 4 is a schematic block diagram showing an electric configuration of a camera 11;
FIG. 5 is a flow chart showing an operation of code reading processing in the code reader device 1; FIG. 6 is a flow chart showing an operation of preview display processing in the code reader device 1;
FIG. 7A is a flow chart showing an operation of frame position setting processing in the code reader device 1;
FIG. 7B is a flow chart showing an operation of a modified example of the frame position setting processing;
FIG. 8 is a flow chart showing an operation of a modified example of preview display processing shown in FIG. 6;
FIG. 9A is a table of setting data for each size of a preview image; FIG. 9B is a view showing sizes of the preview image in comparison to a display screen;
FIG. 10A is a view showing examples of adjustment for the angle of view in the captured image;
FIG. 10B is a view showing examples of displaying the captured image for each angle of view;
FIG. 11A is a view showing expansion or reduction of an extraction area;
FIG. 11B is a view showing movement of the extraction area; and FIG. 12 is a view showing a display example of displaying the code extracting target area differently from the remaining area with respect to the luminance.
Best Mode for Carrying Out the Invention
Embodiments of a code reader device according to the present invention will now be described with reference to FIGS. 1 to 12. The present invention is not limited to the embodiments and terms used for explaining the embodiments are not limited to the terms in the following description. First, an external configuration of a code reader device 1 is described.
FIG. 1 is a front perspective view of a code reader device 1. The code reader device 1 is provided as a camera-equipped portable terminal such as a camera-equipped portable phone (cellular telephone) or a camera-equipped personal digital assistant (PDA). The code reader device 1 includes a display device 14 in an upper half of its front face and an input device 15 on the other area of the front face and on a right side surface. The input device 15 includes a power key 15a, a trigger key 15b, and other various function keys 15c including alphanumeric keys for inputting numbers 0 to 9 or characters such as alphabets, and cursor keys. The input device 15 outputs a key operation signal to a CPU 10 described later with reference to FIG. 3. A plurality of charging terminals 16a are provided at the lower end of the main body of the code reader device 1 for charging power to a power source 16, which is an incorporated secondary battery described later with reference to FIG. 3.
FIG. 2 is a rear perspective view of the code reader device 1. The code reader device 1 includes a camera 11 provided at an upper part of its back face. The camera 11 is a digital camera with a built-in optical image sensor such as a charge coupled device (CCD) or a complementary metal-oxide semiconductor (CMOS) . A battery cover 16b is provided at the back face of the code reader device 1 for opening and closing a battery housing unit that houses a secondary battery as a power supply 16 described later with reference to FIG. 3. A fixing member (lock knob) 15c which fixes and releases the battery cover 16b is provided at an end part of the battery cover 16b. Subsequently, an internal configuration of the code reader device 1 is described hereinafter.
FIG. 3 is a schematic block diagram showing an electric configuration of the code reader device 1. As shown in FIG. 3, the code reader device 1 is composed of a central processing unit (CPU) 10, the camera 11, a random access memory (RAM) 12, a read only memory (ROM) 13, the display device 14, the input device 15, and the power source 16, and the components are interconnected to each other via a bus. The CPU 10 uses the RAM 12 as a work area, expands a variety of control programs or setting data stored in the ROM 13 into the work area, and sequentially executes the programs, thereby controlling each unit and device of the code reader device 1. Specifically, the CPU 10 carries out operational processing described later, on the basis of the operating programs stored in the ROM 13. The operational processing of the CPU 10 is carried out as follows. Prior to extraction of a code from a captured image, a size for displaying the captured image is specified, and a code extracting target area (decode area) is displayed according to the specified size so that the target area is discriminated from the remaining area of the captured image. The code extracting target area is a target of code decoding processing on the captured image. The camera 11, as shown in FIG. 4, comprises a sensor unit 111, an optical system driver 112, a driver circuit 113, an image sensor 114, an analog processing circuit 115, an A/D circuit 116, a buffer 117, a signal processing circuit 118, a compressing/decompressing circuit 119, an optical lens 120, and a shutter 121. Image data captured by the image sensor 114 is output after being converted into a predetermined format in response to an instruction from the CPU 10.
The sensor unit 111 is composed of a distance measuring circuit with an infrared-ray projector and a receiver, and an exposure measuring circuit with a photoconductor such as CdS (neither shown in particular). The sensor unit 111 outputs a distance value and an exposure value of an image capturing object located in the image shooting direction to the CPU 10.
The optical system driver 112 drives the optical lens 120 or the shutter 121 by means of a stepping motor, an electromagnetic solenoid, or the like. According to an instruction from the CPU 10 depending on the distance or exposure value obtained by the sensor unit 111, the optical lens 120 is moved so that an image formed on the image sensor 114 is focused on the image capturing object, and the exposure is controlled with the shutter 121 so that the quantity of light incoming to the image sensor 114 becomes proper. The driver circuit 113 sequentially captures charges generated by photoelectric conversion for each pixel of the image sensor 114 as an image signal, and sends the signal to the analog processing circuit 115. The image sensor 114 is composed of an optical sensor such as a CCD or a CMOS, and outputs an image formed on an imaging area of the image sensor, as an image signal corresponding to each pixel.
The analog processing circuit 115 includes a correlated double sampling (CDS) circuit which reduces noise in the image signal, and an automatic gain control (AGC) circuit which amplifies the image signal. The analog processing circuit carries out required analog signal processing onto the image signal input from the image sensor 114, and outputs the analog image signal to the A/D circuit 116.
The A/D circuit 116 converts the analog image signal input from the analog processing circuit 115 into a digital image signal, and outputs the converted digital image signal to the buffer 117. The buffer 117 temporarily stores the digital image signal, and sequentially outputs the signal to the signal processing circuit 118 or compressing/decompressing circuit 119 in response to an instruction from the CPU 10. It is possible to output the digital image signal corresponding to a predetermined area, which is extracted by specifying an address pointer of the stored digital image data, from the buffer 117.
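The address-pointer extraction from the buffer 117 mentioned above can be pictured as reading a rectangular window out of a row-major frame buffer. The following is a minimal sketch of that idea, not code from the patent; the buffer layout (flat, 8-bit, row-major) and all names are assumptions for illustration.

```python
# Illustrative sketch only (assumed flat, row-major, 8-bit frame buffer):
# read a rectangular sub-area by computing the address offset of each row,
# similar in spirit to extracting a predetermined area from the buffer 117.
def read_sub_area(frame, frame_width, x, y, width, height):
    area = bytearray()
    for row in range(y, y + height):
        start = row * frame_width + x          # address of the row's first pixel
        area.extend(frame[start:start + width])
    return area

if __name__ == "__main__":
    # Hypothetical VGA-sized buffer filled with a gradient; extract a 4x2 window.
    w, h = 640, 480
    frame = bytearray(i % 256 for i in range(w * h))
    print(list(read_sub_area(frame, w, x=100, y=50, width=4, height=2)))
```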
The signal processing circuit 118 includes a digital signal processor (DSP), not shown in particular. The signal processing circuit 118 carries out image processing, such as luminance processing, color processing, and detection of an object (detection of a code) in the captured image on the basis of a predetermined threshold value, on the input digital image signal. The image processing may include calculating sharpness of a predetermined area of the image so as to output the calculated sharpness to the CPU 10. In this manner, the CPU 10 can adjust the position of the optical lens 120 to a focused state in which the sharpness becomes maximal and the contrast becomes high. The compressing/decompressing circuit 119 compresses and decompresses the input digital image signal with a predetermined encoding scheme, and outputs the resulting signal to the CPU 10. Either a lossless or a lossy compression/decompression scheme may be used; for example, Adaptive Discrete Cosine Transform (ADCT), or conversion to a Joint Photographic Experts Group (JPEG) image using Huffman coding, which is an entropy encoding scheme, is acceptable.
The optical lens 120 has its lens position adjusted by the optical system driver 112, and forms (focuses) an image of a capturing object onto an imaging area on the image sensor 114. The optical lens 120 may be configured to adjust an angle of view by adjustment of lens arrangement with the optical system driver 112 using a plurality of lenses, and to enable optical zoom. This angle-of-view adjustment using the optical lens 120 is explicitly described as "the optical zoom" in order to discriminate it from a description relating to an angle of view described later. With respect to the shutter 121, a plurality of shutter vanes (not shown in particular) are located between the optical lens 120 and the image sensor 114, and the shutter vanes are driven by the optical system driver 112, so that the quantity of passing light can be controlled. The display device 14 comprises a display screen composed of a liquid crystal display (LCD). The display device 14 displays an image on the display screen by required display processing according to display signals input from the CPU 10. The display screen is not limited to the LCD. The display device may be composed of another display element which can be built into a portable terminal.
The input device 15 has the power key 15a, the trigger key 15b, and other various function keys 15c such as cursor keys, as shown in FIG. 1, and outputs a key operating signal to the CPU 10. In addition to the operating keys, the input device 15 may include a rotating body, such as a dial key or a rotary trackball, which outputs the rotational position of the rotating body as an operating signal. In particular, to instruct a level of increase or decrease, such as expansion or reduction, or to instruct a moving direction such as up, down, left, or right, it is preferable to accept the operating instruction from the rotating body.
The power source 16 supplies electric power from a battery power source (not shown) to each unit and device of the code reader device 1 in response to an instruction from the CPU 10 or an operation of the power key. The battery power source houses built-in secondary batteries such as a nickel-cadmium accumulator, a nickel-hydride battery, or a lithium-ion battery. In addition, the built-in secondary batteries can be charged by a charger connected to the plurality of charging terminals 16a. The battery power source is not limited to secondary batteries, and may be primary batteries such as an alkaline dry battery or a manganese dry battery.
Subsequently, an operation of code reading processing in the code reader device 1 is described with reference to a flow chart shown in FIG. 5. As shown in FIG. 5, in step S11, the camera 11 performs its initialization, including movement of the optical lens 120 and the shutter 121 to their initial positions, and resetting of the buffer 117. Next, the flow goes to step S12, in which preview display processing is executed to display a captured image on the screen, prior to the extraction of a code from the captured image. Details of the preview display processing in step S12 are shown in FIG. 6.
When the preview display processing is started, a size of a preview image on a display screen is set in step S31 according to an instruction from the input device 15, as shown in FIG. 6.
In a data table shown in FIG. 9A, a width and a height of the preview image to be displayed are set for each size of the preview image. The data table stores setting data of an image size (display area) for each zoom size (angle of view), and setting data of a code extracting target area (width and height) in the preview image, as size information required for zoom size setting processing to be described later. The data table is stored in advance in the ROM 13 shown in FIG. 3.
A selection screen is displayed for size selection of a preview image on the basis of size information such as 4/9 VGA (Video Graphics Array), 1/4 VGA, 1/9 VGA, or 1/16 VGA stored in advance in the data table shown in FIG. 9A. When a user inputs a selection instruction from the input device 15, size information data corresponding to the instruction is read out to the RAM 12, and the size of a preview image (an image area) is set. Then, in step S32, coordinates of a preview image display area, i.e., coordinates of the four corners of the display area are set in accordance with the predetermined data or the user operating instruction.
In accordance with the size of the preview image set by the processing described above, image display is performed as follows. That is, as shown in FIG. 9B, a preview image G is displayed when VGA is set, a preview image A1 is displayed when 1/9 VGA is set, a preview image A2 is displayed when 1/4 VGA is set, and a preview image A3 is displayed when 4/9 VGA is set. Accordingly, to check the code to be decoded, the preview image can be displayed in a large size. When other information, such as a decoding result, is to be displayed, the preview image can be displayed in a small size. In this manner, the display size of the captured image can be selected according to the type of usage. Thus, the usability is improved. Further, because a starting position of the image display can be set by the user, it becomes more flexible to arrange the screen layout.
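The size table of FIG. 9A and the selection in steps S31 and S32 can be sketched as a simple lookup followed by computation of the display-area corners. The concrete pixel values below are assumptions (fractions of a 640x480 VGA screen), not figures taken from the patent.

```python
# Assumed preview-size table in the spirit of FIG. 9A; the pixel values are illustrative.
PREVIEW_SIZES = {
    "VGA":      (640, 480),
    "4/9 VGA":  (426, 320),
    "1/4 VGA":  (320, 240),
    "1/9 VGA":  (213, 160),
    "1/16 VGA": (160, 120),
}

def set_preview_size(selection, origin=(0, 0)):
    """Return the display-area corners for the selected size (steps S31/S32)."""
    width, height = PREVIEW_SIZES[selection]
    x0, y0 = origin                            # user-selectable starting position
    return {"top_left": (x0, y0), "bottom_right": (x0 + width - 1, y0 + height - 1)}

print(set_preview_size("1/4 VGA", origin=(10, 10)))
```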
Next, in step S33, setting a frame display is executed to display a frame for distinguishing a code extracting target area from the remaining area. In step S34, setting of a zoom size (angle of view) is executed in accordance with data stored in advance or the operating instruction by the user. A selection screen is displayed for selecting the zoom size (angle of view) stored in the data table shown in FIG. 9A. When the user inputs a selection instruction from the input device 15, the zoom size is set accordingly. As shown in FIG. 10A, in the case where x1.0 is set, a preview image A11 is extracted, in the case where x1.5 is set, a preview image A12 is extracted, and in the case where x2.0 is set, a preview image A13 is extracted, from the entire imaging area including a code C. As shown in FIG. 10B, the extracted zoom image is displayed on the screen.
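One plausible way to derive the extraction area for a zoom setting is to read a smaller, centered window from the full imaging area; a higher factor means a smaller window. This sketch is an assumption about the geometry, not the patent's own computation.

```python
# Hedged sketch: centered extraction window for a given zoom factor (x1.0, x1.5, x2.0).
def zoom_window(sensor_w, sensor_h, zoom):
    win_w, win_h = int(sensor_w / zoom), int(sensor_h / zoom)
    x0 = (sensor_w - win_w) // 2               # center the window horizontally
    y0 = (sensor_h - win_h) // 2               # center the window vertically
    return x0, y0, win_w, win_h

for z in (1.0, 1.5, 2.0):
    print(z, zoom_window(640, 480, z))
```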
In step S35, setting processing for displaying the frame is performed. The frame is displayed in order to distinguish the code extracting target area from the remaining area. Details of the setting processing for displaying the frame in step S35 are shown in FIG. 7A. When the frame display setting processing is started, the following processing is executed as shown in FIG. 7A. That is, an input operation from the input device 15 is acquired (step S51), and it is determined whether frame expansion or reduction is instructed (step S52). When expansion is instructed, display position data of the frame is set to a value corresponding to the expansion (step S53). When reduction is instructed, frame display position data is set to a value corresponding to the reduction (step S54). Subsequently the processing terminates.
The frame expansion/reduction setting is performed in such a way that the size data of the code extracting target area, and the size data of the imaging area in the image sensor 114 corresponding to the code extracting target area (neither shown in particular), are converted into data corresponding to the instructed expansion/reduction (refer to FIG. 9A). Due to the frame display expansion/reduction setting processing, as shown in FIG. 11A, the frame area, which is the target area of the code extraction and is indicated by a code extracting boundary frame W in the preview image A before the setting, is expanded as in the preview image A21, or reduced as in the preview image A22.
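The expansion/reduction of steps S52 to S54 amounts to scaling the size data of the code extracting target area (and the corresponding sensor-side area). A minimal sketch follows; the 1.1 step factor and the size limits are assumed values, not taken from FIG. 9A.

```python
# Sketch of frame expansion/reduction (steps S52-S54); step factor and limits are assumed.
def resize_frame(frame_w, frame_h, expand, step=1.1, min_size=16, max_w=640, max_h=480):
    factor = step if expand else 1.0 / step
    new_w = min(max(int(frame_w * factor), min_size), max_w)
    new_h = min(max(int(frame_h * factor), min_size), max_h)
    return new_w, new_h

print(resize_frame(200, 200, expand=True))     # expanded frame, e.g. (220, 220)
print(resize_frame(200, 200, expand=False))    # reduced frame, e.g. (181, 181)
```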
The frame display setting processing can be modified as shown in FIG. 7B. That is, an instruction indicating a moving direction by the cursor key or the like from the input device 15 is acquired (step S61), and the display position of the frame is determined on the basis of the acquired moving direction and a distance corresponding to a key-depression time or the like (step S62).
The setting for the movement of the frame display position is performed in such a manner that the position data of the code extracting target area, and the position data of the imaging area in the image sensor 114 corresponding to the code extracting target area (both not shown), are shifted based on the instructed movement direction and distance.
By the setting processing for the movement of the frame display position, as shown in FIG. 11B, the area of the code extracting target can be moved to an arbitrary location in the displayed captured image (in FIG. 11B, a preview image A32 shows the frame movement to the right).
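The movement of FIG. 7B can be sketched as shifting the frame origin by a distance derived from the key-depression time, clamped so the frame stays inside the preview image. The pixels-per-second rate and the preview dimensions below are assumptions.

```python
# Sketch of frame movement (steps S61-S62); speed and preview size are assumed values.
def move_frame(x, y, frame_w, frame_h, direction, hold_seconds,
               preview_w=320, preview_h=240, speed=60):
    dx = {"left": -1, "right": 1}.get(direction, 0) * int(hold_seconds * speed)
    dy = {"up": -1, "down": 1}.get(direction, 0) * int(hold_seconds * speed)
    new_x = min(max(x + dx, 0), preview_w - frame_w)   # keep the frame on screen
    new_y = min(max(y + dy, 0), preview_h - frame_h)
    return new_x, new_y

print(move_frame(60, 60, frame_w=100, frame_h=100, direction="right", hold_seconds=0.5))
```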
As a modified example of the frame display setting processing, the frame expansion/reduction processing and the frame position movement processing described above may be used in combination with each other.
Following step S35, an imaging size of the camera, such as the optical zoom, is set (step S36). The frame setting data for each zoom size is calculated based on the preview size, zoom size, and frame display setting, which are determined during the above-described processing (step S37). An area corresponding to the set zoom size is read out from the buffer 117, whereby the preview image is acquired (step S38). A frame is combined with the preview image based on the calculated frame display data (step S39). The preview image is displayed on the screen of the display device 14 according to the set size and position (step S40). It is determined whether termination of the preview display is instructed with an operating instruction from the input device 15 (step S41). When termination is not instructed, the processing reverts to step S34. When termination is instructed, the preview display processing is terminated.
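Condensed, the loop of steps S38 to S41 reads the area for the set zoom size, overlays the frame, displays the result, and repeats until the user ends the preview. The sketch below uses stand-in callables for the buffer read, overlay, and display operations described in the text; none of them are APIs from the patent.

```python
# Condensed sketch of the preview loop (steps S38-S41); the callables are stand-ins.
def preview_loop(read_zoom_area, draw_frame, show, terminate_requested, frame_data):
    while True:
        image = read_zoom_area()                   # S38: read the set zoom area
        image = draw_frame(image, frame_data)      # S39: combine the boundary frame
        show(image)                                # S40: display at the set size/position
        if terminate_requested():                  # S41: user ended the preview
            break

# Trivial dry run with dummy callables (terminates after one pass).
preview_loop(lambda: "image", lambda img, f: img + "+frame", print, lambda: True, None)
```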
The preview display processing shown in FIG. 6 may be modified as follows. As shown in FIG. 8, step S37 is replaced by calculation of data representing an area in which the luminance is changed, for each zoom size (step S371). The calculation is performed on the basis of the preview size, the zoom size, the frame display position, and the like. The position of the luminance change area is identical to the code extracting target area. Moreover, step S39 is replaced by step S391, in which the display luminance of the captured image outside the code extracting target area is reduced, according to a comparative operation between the data of the luminance change area and the data of the imaging area of the image sensor 114.
In the modified example of the preview display processing, as shown in FIG. 12, an ordinary preview image A41 can be replaced by a preview image A42 obtained by reducing the display luminance outside the code extracting target area. As a consequence, even in a situation in which a frame line indicating the boundary tends to merge into the outside scene (background), the code extracting target area can be easily recognized.
Although the luminance of the code extracting target area may be reduced instead, it is preferable to reduce the luminance outside the code extracting target area so that the code extracting target area is easily recognized. In addition to the luminance change, a change of the color tone of the code extracting target area can be applied.
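Step S391 can be pictured as attenuating every pixel that falls outside the code extracting target area. The sketch below assumes a flat, 8-bit, row-major image and a 50% attenuation factor; both are illustrative choices, not values from the patent.

```python
# Sketch of step S391: dim pixels outside the code extracting target area (factor assumed).
def dim_outside_area(frame, width, area, factor=0.5):
    ax, ay, aw, ah = area                          # target area: x, y, width, height
    out = bytearray(frame)
    for i, value in enumerate(frame):
        x, y = i % width, i // width
        if not (ax <= x < ax + aw and ay <= y < ay + ah):
            out[i] = int(value * factor)           # reduce luminance outside the area
    return out

# 4x4 test image: pixels inside the 2x2 area at (1, 1) keep their value, the rest are dimmed.
print(list(dim_outside_area(bytearray([200] * 16), width=4, area=(1, 1, 2, 2))))
```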
Now, an operation of the code reading processing shown in the flow chart of FIG. 5 is described hereinafter.
As described above, when the preview display processing shown in step S12 terminates, the flow then goes to step S13. In step S13, a lens aperture (a focus adjustment may be included) of the camera 11 or the like is set. In the next step S14, an imaging size of the decoding target (decode area) is set based on the predetermined values or the resultant setting values of the frame display position setting processing (step S35). That is, the imaging size is set according to the predetermined data or the size and position data of the code extracting target area on the imaging area of the image sensor 114.
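Under the assumption of a simple linear mapping, setting the imaging size in step S14 corresponds to converting the frame rectangle chosen on the preview image into the matching rectangle on the sensor's imaging area. The sketch below illustrates that conversion; the scaling model and the example dimensions are assumptions.

```python
# Sketch of step S14 under an assumed linear mapping from preview to sensor coordinates.
def frame_to_sensor_area(frame, preview_size, sensor_size):
    fx, fy, fw, fh = frame
    (pw, ph), (sw, sh) = preview_size, sensor_size
    sx, sy = sw / pw, sh / ph                      # horizontal / vertical scale factors
    return int(fx * sx), int(fy * sy), int(fw * sx), int(fh * sy)

# Example: a 100x100 frame at (60, 60) on a 1/4 VGA preview, mapped to a VGA-sized sensor.
print(frame_to_sensor_area((60, 60, 100, 100), (320, 240), (640, 480)))
```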
Next, in accordance with the setting of the imaging size, the image of the target area (decode area) set by the above-described processing is captured from the stored image in the buffer 117 (step S15). Image data of the captured decode area is stored in a memory area in the RAM 12 (step S16). On the basis of the stored image data, the decode area is displayed on the display device 14 to be checked (step S17). The decoding processing for extracting and decoding the code from the decode area is carried out (step S18).
Subsequently, it is determined whether or not the decode processing in step S18 has succeeded, namely, whether or not the information represented by the code, such as a predetermined character string, is acquired by decoding (step S19). In the case where the information is not acquired, it is determined whether or not a predetermined time has elapsed from the initialization of the camera (step S20). When it is determined that the predetermined time has not elapsed, the flow returns to step S15. When it is determined that the predetermined time has elapsed, or when the decoding has succeeded, the flow terminates. When it is determined that the predetermined time has not elapsed, the flow may instead return to step S13.
As described above, according to the embodiment, prior to the code extraction from the image captured by the camera 11, the code reader device 1 receives the display size of the preview image from the input device 15 for displaying on the display device 14. In response to the image size, based on the setting data stored in the ROM 13, the code extracting target area is displayed on the display device 14 so as to be discriminated from the remaining area of the captured image. Thus, the layout of the captured image on the display screen becomes flexible, so that the usability for the code reading can be improved.
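The capture-and-retry behaviour of steps S15 to S20 described above can be condensed into a loop that keeps capturing and decoding the decode area until a result is obtained or a timeout elapses. The helper callables and the 10-second timeout in this sketch are assumptions, not values specified in the patent.

```python
# Sketch of the retry loop of FIG. 5 (steps S15-S20); capture/decode callables are stand-ins.
import time

def read_code(capture_decode_area, decode, timeout_s=10.0):
    start = time.monotonic()
    while True:
        image = capture_decode_area()              # S15: capture the set decode area
        result = decode(image)                     # S18: extract and decode the code
        if result is not None:                     # S19: decoding succeeded
            return result
        if time.monotonic() - start > timeout_s:   # S20: give up after the timeout
            return None

# Dummy run: the stand-in decoder "succeeds" immediately.
print(read_code(lambda: "captured area", lambda img: "decoded string"))
```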
The code reader device 1 displays, on the display device 14, the frame indicating the boundary between the code extracting target area and the remaining area in the captured image; therefore, the code extracting target area can be recognized easily.
The code reader device 1 may display on the display device 14 the preview image such that the luminance of the code extracting target area differs from that of the remaining area. Accordingly, the user can recognize the difference between the displayed areas as a plane rather than as a boundary line, and thus it may be easier to check the code extracting target area. The code reader device 1 adjusts the zoom size (angle of view) of the captured image by adjusting the extraction area from the imaging area in the image sensor 114, and displays on the display device 14 the captured image subjected to the zoom size (angle of view) adjustment. Therefore, in the case of detecting the code from a wide range of the captured image, or in the case of locating the code precisely in the code extracting target area, display can be carried out in a size suitable for the situation.
In addition, the code reader device 1 specifies the size and position of the code extracting target area of the captured image according to the instruction of the user from the input device 15, adjusts the code extracting target area based on the specified size and position, and displays the adjusted target area on the display device 14 at a suitable size and position. Therefore, when the user adjusts the code extracting target area, the adjustment status can be recognized on the screen.
The present invention is not limited to the above- described embodiments. The detailed configuration and operation of the code reader device 1 in the embodiments may be modified without departing from the spirit of the invention.
For example, the code reader device 1 may include, other than the RAM 12 and the ROM 13, a hard disk drive (HDD), a nonvolatile memory, and a media drive for optical/magnetic storage media. The data for displaying the captured image, such as the preview image size, the zoom size (angle of view) of the preview image, the size of the code extracting target area, the position of the code extracting target area, and the like, may be stored in advance in the above-described storages, instead of being specified by an operation from the input device 15.
In order to set the preview size, the zoom size, the frame display position, and the like, instead of selecting the data from discretely predetermined values, the code reader device 1 may calculate intermediate values of the predetermined discrete values and set the setting data smoothly based on the calculated values.
According to the present invention, the code reader device specifies a display size of the captured image, and displays a code extracting target area according to the specified display size so that the target area is discriminated from the remaining area. As a consequence, a layout of the captured image on the display device becomes flexible, so that the usability for code reading can be improved.
While the description above refers to particular embodiments of the present invention, it will be understood that many modifications may be made without departing from the spirit thereof. The accompanying claims are intended to cover such modifications as would fall within the true scope and spirit of the present invention. The presently disclosed embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims, rather than the foregoing description, and all changes that come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. For example, the code reader device 1 is not limited to a portable terminal housing a digital camera or a handy terminal housing a digital camera. That is, according to another embodiment of the present invention, the code reader device 1 may be a portable phone (cellular phone) housing a digital camera or a personal digital assistant (PDA) housing a digital camera.

Claims

1. A code reader device which extracts a code from an image and decodes the code, the image including a code extracting area and a remaining area, the code reader device comprising: an image size specifying unit which specifies a display size of the image, prior to extracting the code from the image; and a display unit which displays the image with the display size specified by the image size specifying unit such that the code extracting area is distinguished from the remaining area.
2. The code reader device according to claim 1, wherein the display unit displays a frame on a boundary between the code extracting area and the remaining area.
3. The code reader device according to claim 1, wherein the display unit displays the code extracting area with a first luminance level and the remaining area with a second luminance level.
4. The code reader device according to claim 1, further comprising an angle of view adjusting unit which adjusts an angle of view of the image displayed by the display unit.
5. The code reader device according to claim 1, further comprising an area size specifying unit which specifies a display size of the code extracting area, and wherein the display unit displays the code extracting area based on the display size specified by the area size specifying unit.
6. The code reader device according to claim 1, further comprising an area position specifying unit which specifies a display position of the code extracting area, and wherein the display unit displays the code extracting area based on the display position specified by the area position specifying unit.
7. A computer readable recording medium to store program instructions for execution on a computer system, which is used as a code reader device which extracts a code from an image and decodes the code, the image including a code extracting area and a remaining area, enabling the computer system to perform: specifying a display size of the image, prior to extracting the code from the image; and displaying the image with the display size specified by the image size specifying unit such that the code extracting area is distinguished from the remaining area.
8. A portable terminal which extracts a code from an image and decodes the code, the image including a code extracting area and a remaining area, the portable terminal comprising: an image size specifying unit which specifies a display size of the image, prior to extracting the code from the image; and a display unit which displays the image with the display size specified by the image size specifying unit such that the code extracting area is distinguished from the remaining area.
PCT/JP2006/321581 2005-10-26 2006-10-23 Code reader WO2007049774A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP06822542A EP1941421B1 (en) 2005-10-26 2006-10-23 Code reader
DE602006021598T DE602006021598D1 (en) 2005-10-26 2006-10-23 CODE SCANNER

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2005-310968 2005-10-26
JP2005310968A JP4569441B2 (en) 2005-10-26 2005-10-26 Code reader and program

Publications (1)

Publication Number Publication Date
WO2007049774A1 true WO2007049774A1 (en) 2007-05-03

Family

ID=37685608

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2006/321581 WO2007049774A1 (en) 2005-10-26 2006-10-23 Code reader

Country Status (5)

Country Link
US (2) US20070090190A1 (en)
EP (1) EP1941421B1 (en)
JP (1) JP4569441B2 (en)
DE (1) DE602006021598D1 (en)
WO (1) WO2007049774A1 (en)

Families Citing this family (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009031175A (en) * 2007-07-30 2009-02-12 Hitachi High-Technologies Corp Automatic analyzer
JP5217872B2 (en) * 2008-10-07 2013-06-19 カシオ計算機株式会社 Symbol reader and program
JP5690724B2 (en) * 2009-06-04 2015-03-25 ユニバーサル・バイオ・リサーチ株式会社 Sample testing apparatus and method
JP2011165139A (en) * 2010-02-15 2011-08-25 Toshiba Tec Corp Code symbol reading apparatus and control program
CN102279922B (en) * 2010-06-12 2014-07-16 晨星软件研发(深圳)有限公司 Bar code image recognition system applied to handheld device and relevant method
JP2012018494A (en) * 2010-07-07 2012-01-26 Keyence Corp Bar code symbol reader, bar code symbol reading method, and computer program
KR101701839B1 (en) * 2010-07-13 2017-02-02 엘지전자 주식회사 Mobile terminal and method for controlling the same
JP2012064173A (en) * 2010-09-17 2012-03-29 Keyence Corp Setting support device for optical information reading device
US8905314B2 (en) * 2010-09-30 2014-12-09 Apple Inc. Barcode recognition using data-driven classifier
KR101205840B1 (en) 2011-04-27 2012-11-28 주식회사 아이브이넷 An apparatus and a method for setting a setting information of a camera using a chart
US9100576B2 (en) * 2011-12-05 2015-08-04 Xerox Corporation Camera positioning tool for symbology reading
WO2013179250A1 (en) * 2012-05-30 2013-12-05 Evertech Properties Limited Article authentication apparatus having a built-in light emitting device and camera
US9807263B2 (en) 2012-10-31 2017-10-31 Conduent Business Services, Llc Mobile document capture assistance using augmented reality
CN104123520B (en) * 2013-04-28 2017-09-29 腾讯科技(深圳)有限公司 Two-dimensional code scanning method and device
US8870074B1 (en) * 2013-09-11 2014-10-28 Hand Held Products, Inc Handheld indicia reader having locking endcap
US9530038B2 (en) * 2013-11-25 2016-12-27 Hand Held Products, Inc. Indicia-reading system
KR102209729B1 (en) * 2014-04-04 2021-01-29 삼성전자주식회사 Apparatas and method for detecting contents of a recognition area in an electronic device
JP5901680B2 (en) * 2014-04-04 2016-04-13 株式会社デジタル Portable terminal device and display device
CN104090761B (en) * 2014-07-10 2017-09-29 福州瑞芯微电子股份有限公司 A kind of sectional drawing application apparatus and method
JP7230358B2 (en) * 2017-07-27 2023-03-01 大日本印刷株式会社 Diffraction optical element, light irradiation device, light irradiation system, projection pattern correction method
DE102018109142A1 (en) * 2018-04-17 2019-10-17 Bundesdruckerei Gmbh Method for verifying a fluorescent-based security feature

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3169527B2 (en) * 1994-03-16 2001-05-28 旭光学工業株式会社 Data symbol reading device
US5513264A (en) * 1994-04-05 1996-04-30 Metanetics Corporation Visually interactive encoding and decoding of dataforms
US6008837A (en) * 1995-10-05 1999-12-28 Canon Kabushiki Kaisha Camera control apparatus and method
FR2780533B1 (en) * 1998-06-26 2000-09-29 Bsn Sa METHOD AND DEVICE FOR READING RELIEFS CARRIED BY A TRANSPARENT OR TRANSLUCENT CONTAINER
JP2004023167A (en) * 2002-06-12 2004-01-22 Fuji Photo Film Co Ltd Image processing equipment, image output ledger forming equipment, and image processing program
JP4113387B2 (en) * 2002-07-24 2008-07-09 シャープ株式会社 Portable terminal device, information reading program, and recording medium recording the program
JP4296032B2 (en) * 2003-05-13 2009-07-15 富士フイルム株式会社 Image processing device
KR100604011B1 (en) * 2004-01-02 2006-07-24 엘지전자 주식회사 Image processing device and method thereof
TWI261753B (en) * 2004-01-09 2006-09-11 Rvideo Digital Technology Corp DVD playing structure and method for selection of playing multiple captions
JP4655991B2 (en) * 2006-04-21 2011-03-23 カシオ計算機株式会社 Imaging apparatus, electronic zoom method, and program
KR101527037B1 (en) * 2009-06-23 2015-06-16 엘지전자 주식회사 Mobile terminal and method for controlling the same

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1993018478A1 (en) * 1992-03-12 1993-09-16 Norand Corporation Reader for decoding two-dimensional optical information
US5821523A (en) * 1992-03-12 1998-10-13 Bunte; Alan G. Combined code reader and digital camera using a common photodetector
JP2005004642A (en) * 2003-06-13 2005-01-06 V-Sync Co Ltd Camera-equipped cellphone, bar code reading method and program

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008039708A1 (en) * 2006-09-26 2008-04-03 Symbol Technologies, Inc. System and method for an image decoder with feedback
GB2466239A (en) * 2008-12-10 2010-06-23 Ben John Dixon Whitaker A system for adapting the size of a barcode image to suit a display area
WO2013189487A1 (en) * 2012-06-18 2013-12-27 Wipotec Wiege- Und Positioniersysteme Gmbh Monitoring device for an identification, having a detecting and processing arrangement for detecting the identification
US11367177B2 (en) 2012-06-18 2022-06-21 Wipotec Wiege-Und Positioniersysteme Gmbh Checking device for a label, with a detection and processing unit for the detection of the label
GB2533692A (en) * 2014-12-12 2016-06-29 Hand Held Prod Inc Auto-contrast viewfinder for an indicia reader
US9767581B2 (en) 2014-12-12 2017-09-19 Hand Held Products, Inc. Auto-contrast viewfinder for an indicia reader
GB2533692B (en) * 2014-12-12 2019-08-07 Hand Held Prod Inc Auto-contrast viewfinder for an indicia reader
WO2024023495A1 (en) * 2022-07-26 2024-02-01 Quantum Base Limited Method of reading an optically readable security element
WO2024023494A1 (en) * 2022-07-26 2024-02-01 Quantum Base Limited Method of reading an optically readable security element

Also Published As

Publication number Publication date
EP1941421B1 (en) 2011-04-27
US20100038427A1 (en) 2010-02-18
US20070090190A1 (en) 2007-04-26
EP1941421A1 (en) 2008-07-09
DE602006021598D1 (en) 2011-06-09
JP2007122229A (en) 2007-05-17
US8226006B2 (en) 2012-07-24
JP4569441B2 (en) 2010-10-27

Similar Documents

Publication Publication Date Title
EP1941421B1 (en) Code reader
US11696021B2 (en) Video recording device and camera function control program
JP5623915B2 (en) Imaging device
JP5424732B2 (en) Imaging apparatus, control method thereof, and program
JP2007178576A (en) Imaging apparatus and program therefor
JP2010245607A (en) Image recording device and electronic camera
US8687105B2 (en) Image capturing apparatus, image capturing method, and recording medium including a focal length adjustment unit
KR20090045117A (en) Portable device and imaging device
US20040075762A1 (en) Color balance adjustment of image sensed upon emitting flash light
JP6314272B2 (en) Video recording apparatus and video recording method
US7486315B2 (en) Image sensing apparatus and control method therefor
US7071976B2 (en) Image sensing apparatus and control method therefor
JP2005277643A (en) Imaging device
JP4098889B2 (en) Electronic camera and operation control method thereof
US7656435B2 (en) Image processing apparatus and pixel-extraction method therefor
JP5854861B2 (en) Imaging device, control method thereof, and control program
JP5019566B2 (en) Imaging apparatus, control method therefor, computer program, and storage medium
JP2006086926A (en) Information transceiver system and portable terminal
JP2004117775A (en) Camera
KR100827142B1 (en) Apparatus and method for extracting video data in mobile terminal
JP2002314872A (en) Imaging device and imaging method
KR100481526B1 (en) Digital video camera integrated with digital still camera and control method thereof
JP2005208312A (en) Imaging apparatus
KR100947502B1 (en) Apparatus for sensing moving of subject and Method thereof
KR20080109709A (en) Apparatus for sensing moving of subject and method thereof

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 2006822542

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: DE