WO2024135789A1 - Camera control device and apparatus equipped with same - Google Patents
Camera control device and apparatus equipped with same
- Publication number: WO2024135789A1 (PCT application PCT/JP2023/046000)
- Authority
- WO
- WIPO (PCT)
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J13/00—Controls for manipulators
- B25J13/08—Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
Definitions
- the present invention relates to a camera control device that controls an event camera (Event Based Camera) and a device equipped with the same.
- Patent Document 1 describes a robot control system that monitors the robot's work area, and includes an event camera that detects the movement of a moving object in the work area, an imaging camera that captures images of the work area, and a control device connected to the event camera and the imaging camera. This control device is configured to determine whether a person has entered or exited the work area based on the image captured by the imaging camera when the movement of a moving object is detected by the event camera.
- In Patent Document 1, in order to determine the state of the object being photographed, an imaging camera that photographs the object is required in addition to the event camera, which creates problems of increased cost and a complex configuration.
- the present invention aims to provide a camera control device that can determine the state of the subject being photographed, and a device equipped with the same.
- the camera control device of the present invention comprises an event camera that acquires a captured image based on changes in the amount of light of the subject being photographed, and a control unit that acquires a pattern image from the captured image by detecting changes in the amount of light caused by the subject being photographed, and the control unit determines the state of the subject being photographed based on the pattern image.
- a camera capable of low power consumption and high speed processing can be used to recognize pattern images from a photographed subject, and the condition of the photographed subject can be determined.
- FIG. 1 is a system configuration diagram including a robot-shaped camera control device 100 according to the present disclosure.
- FIG. 2 is a diagram showing the configuration of a marker 200 having a Bokode function.
- FIG. 3 is a diagram showing the functional configuration of a robot 100a including the camera control device 100.
- FIG. 4 is a flowchart showing the operation of the camera control device 100.
- FIG. 5 is a diagram illustrating the positional relationship between an event camera 101 and the marker 200 (Bokode pattern).
- FIG. 6 is a diagram showing details of the marker 200 and the Bokode pattern 203 visible within it.
- FIG. 7 is a flowchart showing a process for recognizing a blinking marker 200.
- FIG. 8 is a diagram showing an LED control pattern of the marker 200.
- FIG. 9 is a diagram explaining the LED control pattern (bit detection method) in detail.
- FIG. 10 is a flowchart showing a modified example of the blinking ID recognition process and the marker attitude calculation process.
- FIG. 11 is a diagram showing an example in which the camera control device 100 is applied to VR goggles.
- FIG. 12 is a block diagram showing the functions of the VR goggles 100b.
- FIG. 13 is a diagram showing the configuration of a marker 200a.
- FIG. 14 is a diagram showing the functional configuration of a robot 100a including a camera control device 100c.
- FIG. 15 is a flowchart showing the operation of the camera control device 100c.
- FIG. 16 is a diagram illustrating an example of a hardware configuration of the camera control device 100 according to an embodiment of the present disclosure.
- FIG. 1 is a system configuration diagram including a robot-shaped camera control device 100 according to the present disclosure. As shown in the figure, the camera control device 100 can determine the posture (state) of a work object 300 by detecting a marker 200 attached to the work object 300.
- This camera control device 100 is built into the robot 100a.
- the robot 100a has an arm and the like, and grasps and transports the work object 300 according to the detected posture of the work object 300.
- the work object 300 is, for example, a cardboard box in which products are packed.
- The marker 200 is a tiny light source with embedded ID information, and has a Bokode function.
- A Bokode is a function, or a device, for conveying information that changes depending on the shooting distance or the camera focus position, captured from close to long range with an imaging device that has a focus adjustment mechanism, such as a digital camera.
- With a Bokode, at long distances only the glow of the light source is captured by the camera, while at close range, when the camera is focused at infinity or a long distance, the pattern formed inside the Bokode light source becomes recognizable to the camera.
- FIG. 2 shows the configuration of a marker 200 with a Bokode function.
- As shown in FIG. 2(a), the marker 200 is configured by stacking an LED 201, a diffuser 202, a Bokode pattern 203, and a lens set 204.
- FIG. 2(b) shows the marker 200.
- Figure 2(c) shows the Bokode pattern 203. As shown in the figure, a different pattern is drawn at each position in a 5x5 matrix. From how these patterns appear, it is possible to determine from which direction the image was taken.
- FIG. 3 is a diagram showing the functional configuration of a robot 100a including a camera control device 100.
- This camera control device 100 includes an event camera 101 and a control unit 102.
- the robot 100a also includes a camera control device 100, an arm control unit 103, an arm motor 104, an arm 105, a wheel control unit 106, a wheel motor 107, and wheels 108.
- the event camera 101 is a camera that detects changes in the amount of light on the subject being photographed and outputs the data.
- the control unit 102 controls the event camera 101 based on the data output from the event camera 101, and also determines the state of the marker 200 (work object 300), such as its posture, based on that data.
- the arm control unit 103 is a part that controls the arm motor 104 and the arm 105 according to the posture of the marker 200 recognized by the camera control device 100.
- the arm motor 104 is a power source that moves the arm 105 up, down, left, and right, and also causes the holding part of the arm 105 to grip and release the marker 200 (or the work object 300 to which it is attached).
- the wheel control unit 106 is a part that controls the wheel motor 107 and the wheels 108 so that the robot heads toward the marker 200 (work object 300) recognized by the camera control device 100.
- the wheel motor 107 is a power source for controlling the direction of travel and steering, and rotates the wheels 108 and controls the steering direction.
- Figure 4 is a flowchart showing the operation of the camera control device 100.
- Prior to this operation, the camera control device 100 instructs the marker 200 to start a blinking operation.
- The marker 200 and the camera control device 100 each have a short-range wireless communication unit (not shown), and the instruction is sent via short-range wireless communication. Then, when the event camera 101 detects the blinking of the marker 200 and the control unit 102 confirms it, the control unit adjusts the focus of the event camera 101 according to the distance to the marker 200 (S101).
- the control unit 102 searches for, recognizes, and tracks images in which the marker 200 blinks (S102).
- the control unit 102 performs a time series analysis of the light source of the marker 200 to recognize a blinking ID that indicates the type of the marker 200 (S103). The process of recognizing this blinking ID will be described later.
- the control unit 102 determines whether the size of the light source of the marker 200 captured by the event camera 101 is equal to or larger than a certain size (S104). If it is equal to or larger than a certain size, the control unit 102 determines that the distance to the marker 200 is short, and adjusts the focus of the event camera 101 to infinity (S105).
- The control unit 102 accumulates the event data (image data) output by the event camera 101 in a buffer memory (built into the control unit 102), then superimposes a fixed section of the event data, converts it into time-series frame data, and stores it (S106); a minimal sketch of this accumulation step appears after this flow description.
- The control unit 102 recognizes a Bokode pattern from the stored event data (S107).
- The control unit 102 calculates the posture of the marker 200 from the recognized Bokode pattern (S108).
- In parallel with the above process, the control unit 102 searches for, recognizes, and tracks blinking images among the images acquired for other markers 200 (S109).
- the control unit 102 performs a time series analysis of the light source of the other markers 200 to recognize the blinking ID indicating the type of the other marker 200 (S110).
- the control unit 102 performs processes S106 to S108 for the other markers 200 recognized here.
- While performing the above-mentioned control, the wheel control unit 106 of the robot 100a advances the robot in the direction in which the marker 200 is blinking, and when the robot comes within a predetermined distance, it controls the arm 105 according to the posture of the marker 200 to transport the work object 300.
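As an illustration of the accumulation step S106, the following is a minimal sketch (not from the patent) of how event data might be superimposed into time-series frames. The event tuple layout `(x, y, t, polarity)`, the window length, and the sensor resolution are assumptions made for the example.

```python
import numpy as np

def events_to_frame(events, t_start, t_end, width, height):
    """Superimpose events from [t_start, t_end) into a single 2D frame.

    `events` is assumed to be a list of (x, y, t, polarity) tuples, with
    polarity +1 for an ON (brightness increase) event and -1 for OFF.
    """
    frame = np.zeros((height, width), dtype=np.int32)
    for x, y, t, polarity in events:
        if t_start <= t < t_end:
            frame[y, x] += polarity  # accumulate signed event counts per pixel
    return frame

# Example: convert an event stream into time-series frames of 10 ms each
def stream_to_frames(events, window_s=0.01, width=640, height=480):
    t0 = min(e[2] for e in events)
    t1 = max(e[2] for e in events)
    frames = []
    t = t0
    while t < t1:
        frames.append(events_to_frame(events, t, t + window_s, width, height))
        t += window_s
    return frames
```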
- Figure 5 explains the positional relationship between the event camera 101 and the marker 200 (vocode pattern).
- As shown in the figure, the Bokode pattern 203 that is captured varies depending on the position of the event camera 101.
- For example, pattern P15 of the Bokode pattern 203 is visible from the event camera 101 at position P1 through the lens set 204.
- Pattern P11 is visible from the event camera 101 at position P3.
- the posture of the marker 200 is determined based on the positional relationship between this visible image (position) and the position of the event camera 101.
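To illustrate why the visible pattern cell reveals the shooting direction, here is a sketch under a simple thin-lens model: because the pattern sits at the lenslet's focal plane, each cell is collimated into its own viewing direction. The 5x5 grid matches the patent's example, but the focal length and cell pitch below are made-up values, not taken from the patent.

```python
import math

# Hypothetical layout constants: a 5x5 Bokode pattern behind a lenslet
GRID = 5
F_MM = 5.0       # lenslet focal length (assumed)
PITCH_MM = 0.2   # spacing between pattern cells (assumed)

def viewing_angles(row, col):
    """Return (elevation, azimuth) in degrees from which the camera must be
    looking for cell (row, col) of the pattern to appear on-axis.

    The pattern sits at the lenslet's focal plane, so each cell is collimated
    into its own direction: a lateral offset d maps to the angle atan(d / f).
    """
    center = (GRID - 1) / 2.0
    dx = (col - center) * PITCH_MM
    dy = (row - center) * PITCH_MM
    azimuth = math.degrees(math.atan2(dx, F_MM))
    elevation = math.degrees(math.atan2(dy, F_MM))
    return elevation, azimuth

# Seeing different cells (e.g., P15 vs. P11, as in FIG. 5) therefore implies
# two different camera directions relative to the marker:
print(viewing_angles(0, 4))  # one corner cell
print(viewing_angles(0, 0))  # the opposite corner cell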
- Figure 6 shows details of the marker 200 and the Bokode pattern 203 visible within it.
- Figure 6(a) shows an image of the marker 200 captured by the event camera 101 when the camera control device 100 (event camera 101) and the marker 200 are far apart (a predetermined distance or more). As shown in the figure, the light from the LED 201 that constitutes the marker 200 is captured as a round, spherical glow, and the Bokode pattern 203 is not captured. At this time, the focus of the event camera 101 is adjusted to the marker 200.
- Figure 6(b) shows an image captured when the camera control device 100 approaches the marker 200.
- In this figure, the focus of the event camera 101 is set to infinity once the light from the marker 200 reaches a predetermined size.
- As shown in the figure, a portion of the Bokode pattern 203 is captured.
- Figure 6(c) shows an image captured when the camera control device 100 approaches the marker 200 even more closely. As shown in the figure, each cell of the Bokode pattern 203 is clearly captured.
- FIG. 6(d) shows an example of the Bokode pattern 203.
- As shown in the figure, the Bokode pattern 203 is formed as a 5x5 matrix of patterns.
- The pattern positions in FIG. 6(d) differ from those in the captured image shown in FIG. 6(c). This is because, as explained with FIG. 5, the Bokode pattern that is captured differs depending on the positional relationship between the camera control device 100 and the marker 200 and on the shooting direction. The camera control device 100 determines its relative attitude and position with respect to the marker 200 from this captured image.
- In the present disclosure, the Bokode pattern 203 is a 5x5 matrix pattern, but it is of course not limited to this and may be a smaller matrix or, conversely, a pattern with more cells.
- FIG. 7 is a flowchart showing the process of recognizing a blinking marker 200.
- the event camera 101 captures images according to the blinking of the marker 200. That is, the amount of light changes due to blinking, and the event camera 101 captures this change in the amount of light.
- the marker 200 blinks according to on/off data in which the blinking ID is encoded using Manchester code. This blinking ID indicates the type of marker 200, etc.
- the control unit 102 accumulates the on/off data of the time region where the marker 200 is blinking in a fixed frame length (a predetermined time) in a memory unit (not shown) (S301).
- the control unit 102 searches the accumulated on/off data of the fixed frame length and detects the on/off period of the blinking (S302).
- the control unit 102 determines whether the length of the off period is equal to or longer than a certain period (S303).
- the control unit 102 then decodes the on/off data indicating the Manchester-encoded blinking ID, starting from the rise time of the on period (S304).
- The event camera 101 has the characteristic that a signal is generated only when the brightness of the subject being photographed changes. Therefore, so that 0/1 can be read accurately given this characteristic, the marker 200 applies Manchester coding to its blinking ID and blinks according to the encoded data.
- Manchester codes are characterized by the fact that there is always a transition in the middle of each bit period and, depending on the information being transmitted, there may also be a transition at the beginning of the period. Therefore, an event always occurs at least once per bit period, making it easier for the event camera 101 to recognize the blinking ID.
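A minimal sketch of steps S301 to S304 might look as follows. It assumes the ON/OFF edge events have already been associated with one marker's light source, and that the bit period and the sleep-gap length are known; it is an illustration, not the patent's implementation.

```python
def decode_blinking_id(edges, bit_period, min_gap):
    """Decode a Manchester-encoded blinking ID from LED edge events (S301-S304).

    `edges` is a time-sorted list of (t, polarity) pairs, with polarity +1 for
    an Off->On edge and -1 for an On->Off edge, as reported by the event
    camera for one marker's light source.
    """
    # S302/S303: find the first ON edge following a sufficiently long off
    # period -- this marks the start of one repetition of the blinking ID.
    start = None
    for i in range(1, len(edges)):
        if edges[i][1] == +1 and edges[i][0] - edges[i - 1][0] >= min_gap:
            start = i
            break
    if start is None:
        return []

    # S304: decode from the rise time. Per the convention described below,
    # a mid-bit Off->On transition is a 1 and a mid-bit On->Off is a 0.
    # (This sketch assumes the first edge after the gap is a mid-bit edge.)
    t0 = edges[start][0]
    bits = []
    for t, polarity in edges[start:]:
        phase = (t - t0) % bit_period
        # keep mid-bit transitions only; boundary transitions between equal
        # bits fall near half a bit period away and are skipped
        if min(phase, bit_period - phase) < 0.25 * bit_period:
            bits.append(1 if polarity == +1 else 0)
    return bits
```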
- FIG. 8 is a diagram showing an LED control pattern based on the Manchester-encoded on/off data of the marker 200.
- This diagram shows a Manchester-encoded control pattern.
- an LED control unit (not shown) provided in the marker 200 repeatedly turns the LED on and off.
- solid lines indicate on transitions, and dashed lines indicate off transitions.
- the marker 200 expresses its own ID by repeatedly turning the LED on and off.
- The blinking ID of the marker 200 indicates the type of the work object 300 or indicates the Bokode pattern 203.
- the vertical axis indicates the amount of change in luminance
- the horizontal axis indicates time.
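On the marker side, the LED control pattern of FIG. 8 can be pictured as an on/off schedule derived from the Manchester-encoded ID. The sketch below is one plausible rendering under the bit convention described in this section; the ID value and durations are placeholders.

```python
def manchester_led_schedule(blink_id_bits, bit_period, sleep_period):
    """Build the LED on/off schedule for one transmission of a blinking ID.

    Uses the convention of the FIG. 8/9 description: bit 0 is sent as an
    On->Off transition and bit 1 as an Off->On transition within each bit
    period, and a sleep (LED off) period marks the end of the ID.
    Returns a list of (duration_s, led_on) segments.
    """
    schedule = []
    for bit in blink_id_bits:
        if bit == 0:
            schedule.append((bit_period / 2, True))   # first half on
            schedule.append((bit_period / 2, False))  # mid-bit On -> Off
        else:
            schedule.append((bit_period / 2, False))  # first half off
            schedule.append((bit_period / 2, True))   # mid-bit Off -> On
    schedule.append((sleep_period, False))  # trailing sleep period
    return schedule

# Example: a 4-bit ID 0b1010 with 10 ms bits and a 50 ms sleep period
print(manchester_led_schedule([1, 0, 1, 0], 0.010, 0.050))
```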
- FIG. 9 is a diagram that provides a detailed explanation of the bit detection method used by an event camera.
- an event camera can only recognize the occurrence and disappearance (On/Off) of an event, and cannot detect the continuation of an event. In other words, only the On/Off edges of an LED can be detected as an event.
- Manchester encoding is suitable for bit reading by an event camera because any bit sequence of 00, 01, 10, or 11 always results in an On/Off switch.
- bit "0" indicates On ⁇ Off
- bit "1" indicates Off ⁇ On.
- a sleep period is provided to indicate the start and end of the blinking that indicates the blinking ID, and the blinking is off for a specified period of time.
- the camera control device 100 can recognize the blinking ID by analyzing the blinking on/off status over time.
- FIG. 10 is a flowchart showing the operation of the camera control device 100 in this modified example.
- the focus of the event camera 101 is adjusted to infinity from the beginning (S201).
- the event camera 101 and the control unit 102 perform the blinking ID recognition process of the marker 200 and the attitude calculation process of the marker 200 in parallel (S202 to S206).
- Processes S202 to S203 are performed for multiple markers 200.
- While the blinking ID is being recognized (S202, S203), the control unit 102 recognizes the stored Bokode pattern and calculates the posture of the marker 200 (S204 to S206). Note that since recognition of the Bokode pattern is based on the result of the blinking ID recognition, the first pass of steps S204 to S206 is performed after the blinking ID has been recognized in steps S202 to S203.
- the focus is adjusted to infinity from the beginning, so focus processing can be omitted, simplifying processing.
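A rough sketch of this parallel arrangement follows, with `recognize_blink_id`, `calculate_pose`, and the pass count as hypothetical stand-ins for steps S202-S203 and S204-S206; the patent does not prescribe threads or any particular concurrency mechanism.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for S202-S203 and S204-S206; real versions would
# operate on the accumulated event frames described earlier.
def recognize_blink_id(frames):
    return 0b0101  # placeholder blinking ID

def calculate_pose(frames, blink_id):
    return {"id": blink_id, "pose": "placeholder"}

def run_modified_flow(frames, passes=3):
    """Sketch of the FIG. 10 flow: blinking-ID recognition and Bokode-pattern
    pose calculation run in parallel, except that the first pose pass waits
    for an ID, since pattern recognition relies on the recognized blinking ID.
    """
    poses = []
    with ThreadPoolExecutor(max_workers=2) as pool:
        blink_id = recognize_blink_id(frames)  # first pass: ID needed up front
        for _ in range(passes):
            id_future = pool.submit(recognize_blink_id, frames)
            pose_future = pool.submit(calculate_pose, frames, blink_id)
            blink_id = id_future.result()       # refreshed ID for next pass
            poses.append(pose_future.result())
    return poses
```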
- FIG. 11 is a diagram showing an example in which the camera control device 100 is applied to VR goggles 100b.
- a user U wears VR goggles 100b equipped with the camera control device 100.
- the VR goggles 100b are goggles that provide a virtual space to the user.
- a user wearing this VR goggles 100b is immersed in the virtual space and cannot see anything in the real space.
- a user wearing the VR goggles 100b can move through a virtual space that imitates the real space while being in the real space.
- A controlled object 300a that exists in the real space is also represented in the virtual space; its positional relationship is reflected in the virtual space by recognizing the Bokode pattern and the like.
- the real space is an interior room
- the virtual space is an arbitrary space that is different from the real space. It is assumed that home appliances, furniture, etc. that exist in the real space will become obstacles or targets in the virtual space, and their shapes will be represented according to the content of the virtual space.
- Markers 200 are attached to the lighting, air conditioners, and the like (on appliance indicators, etc.) installed in the indoor real-space environment, and the VR goggles 100b (camera control device 100) can recognize the type, posture, etc. of these markers 200.
- The VR goggles 100b can then calculate the positional relationship of the VR goggles user within the room from the Bokodes embedded in the markers 200.
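One way to picture this calculation: if each marker's pose in room coordinates is known (for example, registered when the markers are installed) and the Bokode yields the camera's pose relative to the marker, the two transforms compose into the user's pose in the room. This is a sketch of that idea under those assumptions, not the patent's stated method.

```python
import numpy as np

def user_pose_in_room(T_room_marker, T_marker_camera):
    """Compose 4x4 homogeneous transforms to place the VR user in the room.

    T_room_marker:   pose of a marker in room coordinates (assumed known).
    T_marker_camera: pose of the event camera relative to that marker, as
                     determined from the Bokode pattern and blinking ID.
    Returns the camera (user) pose in room coordinates.
    """
    return T_room_marker @ T_marker_camera

# With several visible markers, one simple option is to average the
# translation estimates (rotations would need proper averaging, e.g. SLERP):
def fuse_positions(poses):
    positions = np.array([T[:3, 3] for T in poses])
    return positions.mean(axis=0)
```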
- FIG. 12 is a block diagram showing the functions of the VR goggles 100b.
- the VR goggles 100b include a camera control device 100, a virtual space providing unit 111, a communication unit 112, and a display 113.
- the virtual space providing unit 111 is a part that generates a virtual space by image processing, generates an image for the virtual space based on the control object 300a determined by the camera control device 100, and represents the positional relationship and posture with respect to the user within the virtual space.
- the communication unit 112 is the part that acquires information about the virtual space from an external server, etc.
- the display 113 is a part that displays the virtual space and messages provided by the virtual space providing unit 111 to the user.
- The VR goggles 100b perform the following operations: when far away, the VR goggles 100b (camera control device 100) identify the blinking IDs of the blinking markers 200 (light sources) attached to multiple control objects.
- the VR goggles 100b generate a virtual space on the VR app that includes the controlled object 300a based on the blinking ID and represent the user's walking.
- the control unit 102 determines the type of controlled object according to the blinking ID, and the virtual space providing unit 111 generates an image according to the controlled object 300a (television, air conditioner, lighting). Since the user is in the virtual space, an image is generated with the angle of the controlled object 300a adjusted.
- As the user approaches, the VR goggles 100b identify the blinking ID of the approaching control object and calculate the positional relationship between the control object 300a and the VR goggles 100b.
- The VR goggles 100b then generate a virtual space on the VR app based on the calculated positional relationship, and also generate an approach warning message for the control object 300a as necessary.
- the VR goggles 100b can also be used to detect the user's movement and obstacles in the real world.
- In the above, the controlled objects 300a are lighting, air conditioners, etc., but they are not limited to these. Markers may also be attached to chairs, tables, etc., and other objects such as potted plants may likewise be made detectable as obstacles.
- Fig. 13 is a diagram showing another example of the marker 200a. As shown in Fig. 13(a), one pattern image is drawn on the marker 200a. This marker 200a blinks in the same way as the marker 200.
- Fig. 13(b) is a schematic diagram showing the configuration of the marker 200a. As shown in the figure, the marker 200a is configured by overlapping an LED 201a and a pattern image 203a (transparent plate). The LED 201a blinks under the control of a control unit (not shown). As mentioned above, this blinking operation is preferably based on Manchester-encoded on/off data, but is not limited thereto.
- FIG. 14 shows a robot 100a equipped with a camera control device 100c of the present disclosure.
- the functional configuration is the same as that of FIG. 3, but the control operation in the control unit 102a is different. That is, the control unit 102a determines the inclination of the shape and pattern of the marker 200a photographed by the event camera 101, and can determine the posture of the marker 200a (work object 300, etc.) accordingly. In addition, the control unit 102a always appropriately adjusts the focus of the event camera 101 according to the distance to the marker 200a.
- FIG. 15 is a flowchart showing the operation of the camera control device 100c.
- the event camera 101 adjusts the focus according to the distance (S401), and the control unit 102a searches for, recognizes, and tracks the blinking light source (S402).
- the control unit 102a performs a time series analysis of the light source and recognizes the blinking ID (S403).
- the control unit 102a accumulates the event data acquired by the event camera 101 in time series (S404).
- the control unit 102a recognizes a pattern from the accumulated event data, and calculates the attitude of the marker 200a from the pattern (S406). In this example, the control unit 102a can calculate the attitude of the work object 300 from the shape of the marker 200a and the inclination of the pattern, etc.
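As one concrete (hypothetical) realization of this tilt-based attitude calculation, a perspective-n-point solve over the pattern's detected corners would recover the marker plane's rotation and translation. The sketch below uses OpenCV's `cv2.solvePnP` and assumes the corner positions have been extracted from the accumulated event frames (S404) and that the camera intrinsics `K` and `dist` are known; none of this tooling is prescribed by the patent.

```python
import numpy as np
import cv2  # OpenCV, used here purely as an illustration

def marker_pose_from_corners(image_corners, marker_size_m, K, dist):
    """Estimate the pose of the flat marker 200a from the perspective tilt
    of its pattern. `image_corners` are the four detected pattern corners in
    pixels, ordered to match `object_corners`.
    """
    s = marker_size_m / 2.0
    object_corners = np.array([  # marker corners in its own plane (z = 0)
        [-s, -s, 0], [s, -s, 0], [s, s, 0], [-s, s, 0]
    ], dtype=np.float64)
    ok, rvec, tvec = cv2.solvePnP(
        object_corners, np.asarray(image_corners, dtype=np.float64), K, dist
    )
    if not ok:
        raise RuntimeError("pose estimation failed")
    R, _ = cv2.Rodrigues(rvec)  # rotation matrix: marker tilt w.r.t. camera
    return R, tvec
```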
- the camera control device 100 of the present disclosure provides the following operational effects. Note that in the following explanation, the work object 300 is used as an example, but it can also be applied to the control object 300a.
- the event camera 101 acquires a captured image based on the change in the amount of light (blinking) of the marker 200, which is the object of capture.
- the control unit 102 acquires a pattern image by detecting the change in the amount of light (blinking) caused by the marker 200 from the captured image.
- the control unit 102 determines the positional relationship with the marker 200.
- a camera capable of low power consumption and high speed processing such as the event camera 101, can be used to recognize pattern images, and their positional relationship can be determined from the degree of inclination of the pattern images, etc.
- The control unit 102 may also acquire a pattern image (Bokode pattern 203) from a marker 200 that is within a predetermined distance, the pattern becoming recognizable when the focus of the event camera 101 is set to a long distance or infinity.
- In this way, a camera with low power consumption and high-speed processing, such as the event camera 101, can be used to recognize pattern images such as Bokode patterns and determine the positional relationship.
- The event camera 101 cannot capture an object that is not moving, but it can capture an image of one that blinks.
- The marker 200 is a Bokode having a light source that blinks in a predetermined pattern.
- The event camera 101 acquires the Bokode pattern 203 (pattern image) contained in the Bokode.
- Thus, the event camera 101 and the Bokode-equipped marker 200 can be used to recognize the work object 300 and its state (posture) at high speed and with low power consumption.
- control unit 102 sets the focus of the event camera 101 to infinity or a long distance when the distance to the marker 200 (photographed object) is equal to or less than a predetermined value.
- the control unit 102 determines the state (posture, etc.) of the marker 200 (photographed object) based on the pattern image.
- That is, for the marker 200 (i.e., the work object 300), a pattern image such as a Bokode pattern can be identified by setting the focus of the event camera 101 to infinity or the like, and the posture of the marker 200 can thus be determined.
- control unit 102 determines the state of the marker 200 or marker 200a based on a pattern image.
- Here, “based on a pattern image” means based on the recognized Bokode pattern in the case of the marker 200, and based on the tilt of the pattern image in the case of the marker 200a; in either case, the state (posture, etc.) of the marker can be determined from the pattern image.
- the control unit 102 also identifies the work object 300 based on the blinking ID indicated by the change in the light intensity (blinking) of the marker 200. Then, the control unit 102 determines the state of the work object 300 according to the identified work object.
- the control unit 102 can determine the type of work object 300 to which the marker 200 is attached based on the blinking ID of the marker 200. Therefore, the state of the work object 300, for example, its posture, can be determined based on the type.
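In practice this could be as simple as a lookup table keyed by blinking ID; the table contents below are invented for illustration, since the patent only states that the ID indicates the type of the work object.

```python
# Hypothetical mapping from blinking ID to object type and grasp parameters
WORK_OBJECT_TABLE = {
    0b0101: {"type": "cardboard_box", "grip_width_mm": 300},
    0b0110: {"type": "tray",          "grip_width_mm": 200},
}

def plan_handling(blink_id, marker_pose):
    """Pick handling parameters from the recognized blinking ID, then adapt
    them to the posture determined from the pattern image."""
    info = WORK_OBJECT_TABLE.get(blink_id)
    if info is None:
        return None  # unknown marker: ignore it
    return {"object": info["type"],
            "grip_width_mm": info["grip_width_mm"],
            "approach_pose": marker_pose}
```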
- the camera control device 100 further includes a storage unit that stores the captured images acquired by the event camera 101 in chronological order.
- the control unit 102 determines the blinking pattern based on the captured images, and determines the work object 300 (blinking ID) based on the blinking pattern.
- An event camera 101 can recognize changes in the amount of light, but it cannot by itself keep track of when those changes occur.
- the characteristics of the event camera 101 are utilized to recognize the blinking pattern, making it possible to recognize the work object 300.
- the robot 100a includes a camera control device 100.
- the robot 100a includes an arm 105 for holding a work object 300, and an arm control unit 103 for operating the arm 105 according to the posture of the work object 300 determined by the camera control device 100.
- This configuration makes it possible to perform advanced tasks using the camera control device 100.
- the VR goggles 100b include a camera control device 100.
- the virtual space providing unit 111 places a virtual object based on the controlled object 300a and its posture in a virtual space, and provides the virtual space to the user.
- the camera control device 100 can be used to represent the posture, etc., of a virtual object in virtual space in accordance with an object in real space.
- [1] A camera control device comprising: an event camera that acquires a captured image based on changes in the amount of light of an object to be photographed; and a control unit that acquires a pattern image by detecting, from the captured image, a change in the amount of light caused by the object to be photographed, wherein the control unit determines a positional relationship with the object to be photographed based on the pattern image.
- [2] The camera control device according to [1], wherein the control unit detects a change in the amount of light caused by the object from the captured image, and acquires, from an object to be photographed that is within a predetermined distance, a pattern image that becomes recognizable when the focus of the event camera is set to a long distance or infinity.
- [3] The camera control device according to [2], wherein the object to be photographed includes a Bokode having a light source that blinks in a predetermined pattern, and the control unit acquires the pattern image included in the Bokode.
- [4] The camera control device according to [2] or [3], wherein the control unit sets the focus of the event camera to infinity or a long distance when the distance to the object to be photographed is equal to or less than a predetermined value.
- [5] The camera control device according to any one of [1] to [4], wherein the control unit determines a state of the object to be photographed based on the pattern image.
- [6] The camera control device according to [5], wherein the control unit identifies the object to be photographed based on ID information indicated by the change in the amount of light, and determines the state of the object according to the identified object.
- [7] The camera control device according to [6], further comprising a storage unit that stores the images captured by the event camera in chronological order, wherein the control unit determines a blinking pattern based on the captured images and determines the object to be photographed based on the blinking pattern.
- [8] An apparatus comprising: the camera control device according to any one of [1] to [7]; an arm for holding the object to be photographed; and an arm control unit that operates the arm based on the positional relationship with the object determined by the camera control device.
- [9] A VR goggle-type apparatus comprising: the camera control device according to any one of [1] to [7]; and a virtual space providing unit that arranges a virtual object based on the object to be photographed and the positional relationship in a virtual space, and provides the virtual space to a user.
- Each functional block may be realized by one device that is physically or logically coupled, or by two or more devices that are physically or logically separate and connected directly or indirectly (for example, by wire or wirelessly), with the functional blocks realized using these multiple devices.
- the functional blocks may be realized by combining the one device or the multiple devices with software.
- Functions include, but are not limited to, judgement, determination, judgment, calculation, computation, processing, derivation, investigation, search, confirmation, reception, transmission, output, access, resolution, selection, election, establishment, comparison, assumption, expectation, regard, broadcasting, notifying, communicating, forwarding, configuring, reconfiguring, allocating, mapping, and assignment.
- a functional block (component) that performs the transmission function is called a transmitting unit or transmitter.
- the camera control device 100 in one embodiment of the present disclosure may function as a computer that performs processing of the camera control method of the present disclosure.
- FIG. 16 is a diagram showing an example of the hardware configuration of the camera control devices 100 and 100c in one embodiment of the present disclosure.
- the camera control device 100 described above may be physically configured as a computer device including a processor 1001, memory 1002, storage 1003, communication device 1004, input device 1005, output device 1006, bus 1007, and the like.
- the camera control device 100 includes the camera control device 100c unless otherwise specified.
- the word “apparatus” can be interpreted as a circuit, device, unit, etc.
- the hardware configuration of the camera control device 100 may be configured to include one or more of the devices shown in the figure, or may be configured to exclude some of the devices.
- Each function of the camera control device 100 is realized by loading specific software (programs) onto hardware such as the processor 1001 and memory 1002, causing the processor 1001 to perform calculations, control communications via the communication device 1004, and control at least one of the reading and writing of data in the memory 1002 and storage 1003.
- the processor 1001 for example, runs an operating system to control the entire computer.
- the processor 1001 may be configured as a central processing unit (CPU) including an interface with peripheral devices, a control device, an arithmetic unit, registers, etc.
- the above-mentioned control unit 102 may be realized by the processor 1001.
- the processor 1001 also reads out programs (program codes), software modules, data, etc. from at least one of the storage 1003 and the communication device 1004 into the memory 1002, and executes various processes according to these.
- the programs used are those that cause the computer to execute at least some of the operations described in the above-mentioned embodiments.
- the control unit 102 may be realized by a control program stored in the memory 1002 and running on the processor 1001, and similarly may be realized for other functional blocks.
- the above-mentioned various processes have been described as being executed by one processor 1001, they may be executed simultaneously or sequentially by two or more processors 1001.
- the processor 1001 may be implemented by one or more chips.
- the programs may be transmitted from a network via a telecommunications line.
- Memory 1002 is a computer-readable recording medium, and may be composed of at least one of, for example, ROM (Read Only Memory), EPROM (Erasable Programmable ROM), EEPROM (Electrically Erasable Programmable ROM), RAM (Random Access Memory), etc. Memory 1002 may also be called a register, cache, main memory (primary storage device), etc. Memory 1002 can store executable programs (program codes), software modules, etc. for implementing a camera control method according to one embodiment of the present disclosure.
- Storage 1003 is a computer-readable recording medium, and may be, for example, at least one of an optical disk such as a CD-ROM (Compact Disc ROM), a hard disk drive, a flexible disk, a magneto-optical disk (e.g., a compact disk, a digital versatile disk, a Blu-ray (registered trademark) disk), a smart card, a flash memory (e.g., a card, a stick, a key drive), a floppy (registered trademark) disk, a magnetic strip, etc.
- Storage 1003 may also be referred to as an auxiliary storage device.
- the above-mentioned storage medium may be, for example, a database, a server, or other suitable medium including at least one of memory 1002 and storage 1003.
- the communication device 1004 is hardware (transmitting/receiving device) for communicating between computers via at least one of a wired network and a wireless network, and is also referred to as, for example, a network device, a network controller, a network card, a communication module, etc.
- the communication device 1004 may be configured to include a high-frequency switch, a duplexer, a filter, a frequency synthesizer, etc., to realize, for example, at least one of Frequency Division Duplex (FDD) and Time Division Duplex (TDD).
- the input device 1005 is an input device (e.g., a keyboard, a mouse, a microphone, a switch, a button, a sensor, etc.) that accepts input from the outside.
- the output device 1006 is an output device (e.g., a display, a speaker, an LED lamp, etc.) that performs output to the outside. Note that the input device 1005 and the output device 1006 may be integrated into one structure (e.g., a touch panel).
- each device such as the processor 1001 and memory 1002 is connected by a bus 1007 for communicating information.
- the bus 1007 may be configured using a single bus, or may be configured using different buses between each device.
- the communication device 1004 and the input device 1005 are not essential to the camera control device 100 of this disclosure.
- the camera control device 100 may also be configured to include hardware such as a microprocessor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a programmable logic device (PLD), or a field programmable gate array (FPGA), and some or all of the functional blocks may be realized by the hardware.
- the processor 1001 may be implemented using at least one of these pieces of hardware.
- the notification of information is not limited to the aspects/embodiments described in this disclosure, and may be performed using other methods.
- the notification of information may be performed by physical layer signaling (e.g., DCI (Downlink Control Information), UCI (Uplink Control Information)), higher layer signaling (e.g., RRC (Radio Resource Control) signaling, MAC (Medium Access Control) signaling, broadcast information (MIB (Master Information Block), SIB (System Information Block)), other signals, or a combination of these.
- RRC signaling may be referred to as an RRC message, and may be, for example, an RRC Connection Setup message, an RRC Connection Reconfiguration message, etc.
- the input and output information may be stored in a specific location (e.g., memory) or may be managed using a management table.
- the input and output information may be overwritten, updated, or added to.
- the output information may be deleted.
- the input information may be sent to another device.
- the determination may be based on a value represented by one bit (0 or 1), a Boolean value (true or false), or a numerical comparison (e.g., with a predetermined value).
- notification of specific information is not limited to being done explicitly, but may be done implicitly (e.g., not notifying the specific information).
- Software shall be construed broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software modules, applications, software applications, software packages, routines, subroutines, objects, executable files, threads of execution, procedures, functions, etc., whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise.
- Software, instructions, information, etc. may also be transmitted and received via a transmission medium.
- For example, if software is transmitted from a website, server, or other remote source using at least one of wired technologies (such as coaxial cable, fiber-optic cable, twisted pair, or Digital Subscriber Line (DSL)) and wireless technologies (such as infrared or microwave), then at least one of these wired and wireless technologies is included within the definition of a transmission medium.
- the information, signals, etc. described in this disclosure may be represented using any of a variety of different technologies.
- the data, instructions, commands, information, signals, bits, symbols, chips, etc. that may be referred to throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or magnetic particles, optical fields or photons, or any combination thereof.
- At least one of the channel and the symbol may be a signal (signaling).
- the signal may be a message.
- a component carrier (CC) may be called a carrier frequency, a cell, a frequency carrier, etc.
- a radio resource may be indicated by an index.
- the names used for the parameters described above are not intended to be limiting in any way. Furthermore, the formulas etc. using these parameters may differ from those explicitly disclosed in this disclosure.
- the various channels (e.g., PUCCH, PDCCH, etc.) and information elements may be identified by any suitable names, and the various names assigned to these various channels and information elements are not intended to be limiting in any way.
- a mobile station may also be referred to by those skilled in the art as a subscriber station, mobile unit, subscriber unit, wireless unit, remote unit, mobile device, wireless device, wireless communication device, remote device, mobile subscriber station, access terminal, mobile terminal, wireless terminal, remote terminal, handset, user agent, mobile client, client, or some other suitable terminology.
- The term “determining” may encompass a wide variety of actions.
- “Judging” and “determining” may include, for example, considering judging, calculating, computing, processing, deriving, investigating, looking up, searching, or inquiring (e.g., looking up in a table, database, or other data structure), as well as ascertaining, to be “judging” or “determining.”
- “Judging” and “determining” may also include considering receiving (e.g., receiving information), transmitting (e.g., transmitting information), input, output, or accessing (e.g., accessing data in memory) to be “judging” or “determining.”
- “Judging” and “determining” may further include considering resolving, selecting, choosing, establishing, comparing, etc., to be “judged” or “determined.” In other words, “judging” and “determining” may include considering some action to be “judged” or “determined.” Additionally, “judgment (decision)” may be interpreted as “assuming,” “expecting,” “considering,” and the like.
- connection refers to any direct or indirect connection or coupling between two or more elements, and may include the presence of one or more intermediate elements between two elements that are “connected” or “coupled” to one another.
- the coupling or connection between elements may be physical, logical, or a combination thereof.
- “connected” may be read as "access.”
- two elements may be considered to be “connected” or “coupled” to one another using at least one of one or more wires, cables, and printed electrical connections, as well as electromagnetic energy having wavelengths in the radio frequency range, microwave range, and optical (both visible and invisible) range, as some non-limiting and non-exhaustive examples.
- the phrase “based on” does not mean “based only on,” unless expressly stated otherwise. In other words, the phrase “based on” means both “based only on” and “based at least on.”
- any reference to an element using a designation such as "first,” “second,” etc., used in this disclosure does not generally limit the quantity or order of those elements. These designations may be used in this disclosure as a convenient method of distinguishing between two or more elements. Thus, a reference to a first and a second element does not imply that only two elements may be employed or that the first element must precede the second element in some way.
- The phrase “A and B are different” may mean “A and B are different from each other.”
- the term may also mean “A and B are each different from C.”
- Terms such as “separate” and “combined” may also be interpreted in the same way as “different.”
- 100...camera control device, 100a...robot, 101...event camera, 102...control unit, 103...arm control unit, 104...arm motor, 105...arm, 106...wheel control unit, 107...wheel motor, 108...wheels, 200...marker, 201...LED, 202...diffuser, 203...Bokode pattern, 204...lens set, 300...work object.
Abstract
The present invention determines an object to be imaged. In a camera control device 100 according to the present disclosure, an event camera 101 acquires a captured image on the basis of the light quantity change (blinking) of a marker 200, which is the object to be imaged. A control unit 102 detects the light quantity change (blinking) of the marker 200 from the captured image, and acquires a pattern image (Bokode pattern 203) which becomes recognizable in a state where the focus of the event camera 101 is set to a long distance or infinity with respect to a marker 200 located within a prescribed distance. The control unit 102 then determines the positional relationship with the marker 200.
Description
The present invention relates to a camera control device that controls an event camera (Event Based Camera) and a device equipped with the same.
Patent Document 1 describes a robot control system that monitors the robot's work area, and includes an event camera that detects the movement of a moving object in the work area, an imaging camera that captures images of the work area, and a control device connected to the event camera and the imaging camera. This control device is configured to determine whether a person has entered or exited the work area based on the image captured by the imaging camera when the movement of a moving object is detected by the event camera.
In Patent Document 1, in order to determine the state of the object being photographed, an imaging camera that photographs the object is required in addition to the event camera, which creates problems of increased cost and a complex configuration.
In order to solve the above problems, the present invention aims to provide a camera control device that can determine the state of the subject being photographed, and a device equipped with the same.
The camera control device of the present invention comprises an event camera that acquires a captured image based on changes in the amount of light of the subject being photographed, and a control unit that acquires a pattern image from the captured image by detecting changes in the amount of light caused by the subject being photographed, and the control unit determines the state of the subject being photographed based on the pattern image.
In accordance with the present invention, a camera capable of low power consumption and high speed processing can be used to recognize pattern images from a photographed subject, and the condition of the photographed subject can be determined.
The embodiments of the present disclosure will be described with reference to the attached drawings. Where possible, identical parts will be designated by the same reference numerals, and duplicate explanations will be omitted.
FIG. 1 is a system configuration diagram including a robot-shaped camera control device 100 according to the present disclosure. As shown in the figure, the camera control device 100 can determine the posture (state) of a work object 300 by detecting a marker 200 attached to the work object 300.
This camera control device 100 is built into the robot 100a. The robot 100a has an arm and the like, and grasps and transports the work object 300 according to the detected posture of the work object 300. The work object 300 is, for example, a cardboard box in which products are packed.
The marker 200 is a tiny light source with embedded ID information, and has a Bokode function. A Bokode is a function, or a device, for conveying information that changes depending on the shooting distance or the camera focus position, captured from close to long range with an imaging device that has a focus adjustment mechanism, such as a digital camera. With a Bokode, at long distances only the glow of the light source is captured by the camera, while at close range, when the camera is focused at infinity or a long distance, the pattern formed inside the Bokode light source becomes recognizable to the camera.
FIG. 2 shows the configuration of a marker 200 with a Bokode function. As shown in FIG. 2(a), the marker 200 is configured by stacking an LED 201, a diffuser 202, a Bokode pattern 203, and a lens set 204. FIG. 2(b) shows the marker 200.
Figure 2(c) shows the Bokode pattern 203. As shown in the figure, a different pattern is drawn at each position in a 5x5 matrix. From how these patterns appear, it is possible to determine from which direction the image was taken.
FIG. 3 is a diagram showing the functional configuration of a robot 100a including a camera control device 100. This camera control device 100 includes an event camera 101 and a control unit 102. The robot 100a also includes a camera control device 100, an arm control unit 103, an arm motor 104, an arm 105, a wheel control unit 106, a wheel motor 107, and wheels 108.
The event camera 101 is a camera that detects changes in the amount of light on the subject being photographed and outputs the data.
The control unit 102 controls the event camera 101 based on the data output from the event camera 101, and also determines the state of the marker 200 (work object 300), such as its posture, based on that data.
The arm control unit 103 is a part that controls the arm motor 104 and the arm 105 according to the posture of the marker 200 recognized by the camera control device 100. The arm motor 104 is a power source that moves the arm 105 up, down, left, and right, and also causes the holding part of the arm 105 to grip and release the marker 200 (or the work object 300 to which it is attached).
The wheel control unit 106 is a part that controls the wheel motor 107 and the wheels 108 so that the robot heads toward the marker 200 (work object 300) recognized by the camera control device 100. The wheel motor 107 is a power source for controlling the direction of travel and steering, and rotates the wheels 108 and controls the steering direction.
Next, the operation of the camera control device 100 will be described. Figure 4 is a flowchart showing the operation of the camera control device 100.
Prior to this operation, the camera control device 100 instructs the marker 200 to start a blinking operation. The marker 200 and the camera control device 100 are equipped with a short-range wireless communication unit (not shown), and this is performed through short-range wireless communication. Then, when the event camera 101 detects the blinking of the marker 200 and the control unit 102 determines this, it adjusts the focus of the event camera 101 according to the distance to the marker 200 (S101).
The control unit 102 searches for, recognizes, and tracks images in which the marker 200 blinks (S102). The control unit 102 performs a time series analysis of the light source of the marker 200 to recognize a blinking ID that indicates the type of the marker 200 (S103). The process of recognizing this blinking ID will be described later.
The control unit 102 determines whether the size of the light source of the marker 200 captured by the event camera 101 is equal to or larger than a certain size (S104). If it is equal to or larger than a certain size, the control unit 102 determines that the distance to the marker 200 is short, and adjusts the focus of the event camera 101 to infinity (S105).
Then, the control unit 102 accumulates the event data (image data) output by the event camera 101 in a buffer memory (built into the control unit 102), and then superimposes a certain section of the event data to convert it into time-series frame data and store it (S106). The control unit 102 recognizes a Bokode pattern from the stored event data (S107). The control unit 102 calculates the posture of the marker 200 from the recognized Bokode pattern (S108).
In parallel with the above processing, the control unit 102 searches for, recognizes, and tracks blinking images of other markers 200 in the acquired images (S109). The control unit 102 performs a time-series analysis of the light sources of the other markers 200 to recognize the blinking IDs indicating their types (S110). The control unit 102 then performs steps S106 to S108 for each of the other markers 200 recognized here.
While performing the control described above, the robot 100a uses the wheel control unit 106 to advance in the direction in which the marker 200 is blinking, and once it comes within a predetermined distance, it controls the arm 105 according to the posture of that marker 200 to transport the work object 300.
Figure 5 illustrates the positional relationship between the event camera 101 and the marker 200 (vocode pattern). As shown in the figure, the part of the vocode pattern 203 that is captured differs depending on the position of the event camera 101. For example, from the event camera 101 at position P1, pattern P15 of the vocode pattern 203 is visible through the lens set 204, while from position P3, pattern P11 is visible. The posture of the marker 200 is determined from the relationship between the visible pattern (its position within the vocode pattern) and the position of the event camera 101.
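The following sketch illustrates the idea behind this relationship, assuming each cell of the pattern is visible only from one known viewing direction; the 5x5 grid mapping and the angular spacing are hypothetical illustration values, not taken from the disclosure.

```python
ANGLE_STEP_DEG = 10.0  # assumed angular spacing between adjacent pattern cells

def viewing_direction(cell_row, cell_col, grid_size=5):
    """Return the (elevation, azimuth) in degrees implied by the visible cell."""
    center = (grid_size - 1) / 2.0
    elevation = (cell_row - center) * ANGLE_STEP_DEG
    azimuth = (cell_col - center) * ANGLE_STEP_DEG
    return elevation, azimuth

# Example: seeing a corner cell (row 0, column 4) implies the camera is offset
# by about 20 degrees in each axis from the marker's optical axis.
print(viewing_direction(0, 4))  # (-20.0, 20.0)
```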
Figure 6 shows details of the marker 200 and the vocode pattern 203 visible within it. Figure 6(a) shows the image of the marker 200 captured by the event camera 101 when the camera control device 100 (event camera 101) and the marker 200 are far apart (a predetermined distance or more). As shown in the figure, the light from the LED 201 that constitutes the marker 200 is captured as a round, ball-like spot, and the vocode pattern 203 is not captured. At this time, the focus of the event camera 101 is adjusted to the marker 200.
Figure 6(b) shows the image captured when the camera control device 100 has approached the marker 200. Here, the focus of the event camera 101 has been set to infinity because the light from the marker 200 has reached a predetermined size. As shown in the figure, a portion of the vocode pattern 203 is captured.
Figure 6(c) shows the image captured when the camera control device 100 has moved even closer to the marker 200. As shown in the figure, each cell of the vocode pattern 203 is clearly captured.
FIG. 6(d) shows an example of the vocode pattern 203. As shown in the figure, the vocode pattern 203 is formed as a 5x5 matrix of patterns. As Figure 5 suggests, the position within the vocode pattern 203 differs from that in the captured image of Figure 6(c). This is because, as explained with Figure 5, the part of the vocode pattern that is captured depends on the positional relationship between the camera control device 100 and the marker 200 and on the capturing direction. The camera control device 100 determines its relative attitude and position with respect to the marker 200 from this captured image. In this disclosure the vocode pattern 203 is a 5x5 matrix, but it is of course not limited to this; a smaller matrix may be used, or conversely a larger one.
Next, the process of recognizing a blinking marker 200 (its blinking ID) will be described. FIG. 7 is a flowchart showing the recognition process for a blinking marker 200. As shown in the figure, the event camera 101 obtains captured images that follow the blinking of the marker 200: the blinking changes the amount of light, and the event camera 101 captures that change. In this disclosure, the marker 200 blinks according to on/off data obtained by Manchester-encoding the blinking ID. This blinking ID indicates the type of the marker 200 and the like.
The control unit 102 accumulates the on/off data of the time region in which the marker 200 is blinking, for a fixed frame length (a predetermined time), in a storage unit (not shown) (S301). The control unit 102 searches the accumulated fixed-frame-length on/off data and detects the blinking on/off intervals (S302). The control unit 102 determines whether the off interval is at least a certain length (S303). Then, starting from the rising edge of the following on interval, the control unit 102 decodes the on/off data representing the Manchester-encoded blinking ID (S304).
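A minimal Python sketch of steps S301 to S304, assuming the captured on/off data has already been reduced to a boolean sequence at two samples per bit and that each frame begins with an "on" half-bit after the sleep gap (e.g., via a fixed preamble); the gap length and this framing are illustrative assumptions.

```python
SLEEP_GAP_SAMPLES = 8  # assumed minimum run of "off" marking the sleep interval

def decode_blinking_id(samples):
    """Decode a Manchester-encoded blinking ID from on/off samples (S301-S304).

    `samples` is a list of booleans at two samples per bit. Following Figure 9,
    an (on, off) half-bit pair is read as 0 and an (off, on) pair as 1.
    """
    # S302/S303: find a sufficiently long off interval, then its rising edge.
    run, start = 0, None
    for i, s in enumerate(samples):
        run = run + 1 if not s else 0
        if run >= SLEEP_GAP_SAMPLES and i + 1 < len(samples) and samples[i + 1]:
            start = i + 1
            break
    if start is None:
        return None
    # S304: decode half-bit pairs from the rising edge onward.
    bits = []
    for j in range(start, len(samples) - 1, 2):
        first, second = samples[j], samples[j + 1]
        if first == second:  # no mid-bit transition: the frame has ended
            break
        bits.append(1 if (not first and second) else 0)
    return bits
```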
The event camera 101 has the characteristic that it generates a signal only when the brightness of the photographed object changes. To let the 0/1 values be read accurately given this characteristic, the marker 200 applies Manchester coding to the blinking ID that represents its ID, and blinks according to the encoded data. A Manchester code always has a transition in the middle of each bit period and, depending on the transmitted data, may also have a transition at the start of the period. An event is therefore guaranteed to occur in every bit period, at the mid-bit transition, which makes it easier for the event camera 101 to recognize the blinking ID.
FIG. 8 is a diagram showing the LED control pattern based on the Manchester-encoded on/off data of the marker 200. As shown in the diagram, the LED control unit (not shown) provided in the marker 200 repeatedly drives the LED through on and off transitions; solid lines indicate on transitions and dashed lines indicate off transitions. By repeatedly turning the LED on and off in this way, the marker 200 expresses its own ID. The blinking ID of the marker 200 indicates, for example, the type of the work object 300 or the vocode pattern 203. In FIG. 8, the vertical axis indicates the amount of change in luminance and the horizontal axis indicates time.
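For illustration, the following sketch turns an ID into such an LED schedule, assuming the convention of Figure 9 in which 0 is sent as on-then-off and 1 as off-then-on; the half-bit duration is a placeholder.

```python
HALF_BIT_MS = 5  # assumed half-bit duration of the LED schedule

def manchester_schedule(id_bits):
    """Expand ID bits into (led_on, duration_ms) steps: 0 -> on,off; 1 -> off,on."""
    steps = []
    for bit in id_bits:
        first, second = (False, True) if bit else (True, False)
        steps.append((first, HALF_BIT_MS))
        steps.append((second, HALF_BIT_MS))
    return steps

# Example: the ID 1011 becomes off-on, on-off, off-on, off-on.
print(manchester_schedule([1, 0, 1, 1]))
```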
Figure 9 is a diagram explaining in detail how bits are detected with an event camera. As shown in the diagram, an event camera can only recognize the occurrence and disappearance of an event (On/Off); it cannot detect that a state is continuing. In other words, only the On/Off edges of the LED can be detected as events. Manchester coding is well suited to bit reading with an event camera because every bit sequence of 00, 01, 10, or 11 necessarily produces On/Off alternation. In Figure 9, bit "0" is represented as On → Off and bit "1" as Off → On.
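Under the convention of Figure 9, the decoder needs only the edge events themselves. The small sketch below, a simplification with an assumed edge representation, maps each mid-bit edge directly to a bit.

```python
def bits_from_edges(mid_bit_edges):
    """Map mid-bit edge polarities to bits per Figure 9.

    `mid_bit_edges` holds +1 for Off -> On and -1 for On -> Off events,
    already separated from the bit-boundary edges.
    """
    return [1 if polarity > 0 else 0 for polarity in mid_bit_edges]

# Example: the edge train +1, -1, +1, +1 decodes to the bits 1, 0, 1, 1.
print(bits_from_edges([+1, -1, +1, +1]))
```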
In the LED on/off blinking, a sleep interval, during which the LED stays off for a predetermined time, is provided to mark the start and end of the blinking sequence that represents the blinking ID.
The camera control device 100 can recognize the blinking ID by analyzing the blinking on/off status over time.
Next, a modified example of the blinking-ID recognition process and the marker posture calculation process will be described. FIG. 10 is a flowchart showing the operation of the camera control device 100 in this modified example. In this process, the focus of the event camera 101 is set to infinity from the start (S201). The event camera 101 and the control unit 102 then perform the blinking-ID recognition process and the posture calculation process for the marker 200 in parallel (S202 to S206). Steps S202 and S203 are performed for multiple markers 200.
That is, while recognizing the blinking ID (S202, S203), the control unit 102 recognizes the accumulated vocode pattern and calculates the posture of the marker 200 (S204 to S206). Since the vocode-pattern recognition is based on the result of the blinking-ID recognition, the first pass through steps S204 to S206 is performed after the blinking ID has been recognized in steps S202 and S203.
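One way to picture this parallelism is as two cooperating tasks, where the pattern/posture task waits for the first recognized ID; the threading structure and the stub functions below are illustrative assumptions, not the disclosed implementation.

```python
import queue
import threading

id_queue = queue.Queue()  # blinking IDs flow from the ID task to the posture task

def recognize_blinking_id(events):
    """Hypothetical stand-in for the S202-S203 time-series ID analysis."""
    return 42  # placeholder ID

def posture_from_pattern(frames, blinking_id):
    """Hypothetical stand-in for the S204-S206 pattern and posture steps."""
    return (0.0, 0.0, 0.0)  # placeholder roll/pitch/yaw

def id_task(events):
    id_queue.put(recognize_blinking_id(events))       # S202-S203

def posture_task(frames):
    blinking_id = id_queue.get()                      # waits for the first ID
    print(blinking_id, posture_from_pattern(frames, blinking_id))

# The two stages run in parallel, as in Figure 10.
threading.Thread(target=id_task, args=([],)).start()
threading.Thread(target=posture_task, args=([],)).start()
```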
In the camera control device 100 of this modified example, the focus is set to infinity from the start, so the focus adjustment step can be omitted and the processing is simplified.
Next, another application example of the camera control device 100 will be described. FIG. 11 is a diagram showing an example in which the camera control device 100 is applied to VR goggles 100b. As shown in the figure, a user U wears the VR goggles 100b, which are equipped with the camera control device 100. The VR goggles 100b provide the user with a virtual space. Because a user wearing the VR goggles 100b is immersed in the virtual space, the user cannot see objects in the real space. In this example, a user wearing the VR goggles 100b can move through a virtual space modeled on the real space while remaining in the real space. A controlled object 300a existing in the real space is represented in the virtual space, and its positional relationship is reproduced in the virtual space by recognizing its vocode pattern and similar cues.
In this disclosure, it is assumed that the real space is a room and that the virtual space represents an arbitrary space different from the real space. Home appliances, furniture, and other items existing in the real space are assumed to become obstacles or targets in the virtual space, with their shapes rendered to match the content of the virtual space.
Markers 200 (vocodes) are attached to lighting, air conditioners, and other equipment installed in the real-space room (for example, on appliance indicators), and the VR goggles 100b (camera control device 100) can recognize the type, posture, and other attributes of these markers 200. The VR goggles 100b can then calculate the position of the wearer within the room from the vocodes embedded in the markers 200.
FIG. 12 is a block diagram showing the functions of the VR goggles 100b. As shown in the figure, the VR goggles 100b include a camera control device 100, a virtual space providing unit 111, a communication unit 112, and a display 113.
The virtual space providing unit 111 is a part that generates the virtual space by image processing: it generates images for the virtual space based on the controlled object 300a identified by the camera control device 100, and represents the object's position and posture relative to the user within the virtual space.
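As a rough sketch of this placement step, the code below positions a virtual stand-in from a marker-derived pose; the pose format and the scene-graph `add` call are assumptions made for illustration.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    """Marker-derived pose: position in metres, orientation in degrees."""
    x: float
    y: float
    z: float
    yaw: float
    pitch: float
    roll: float

def place_virtual_object(scene, object_kind, pose):
    """Add a virtual stand-in for controlled object 300a at the detected pose.

    `scene` is a hypothetical scene-graph object; the camera control device
    supplies `pose`, and the blinking ID supplies `object_kind`.
    """
    scene.add(kind=object_kind,
              position=(pose.x, pose.y, pose.z),
              rotation=(pose.yaw, pose.pitch, pose.roll))
```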
The communication unit 112 is the part that acquires information about the virtual space from an external server, etc.
The display 113 is a part that shows the user the virtual space and the messages provided by the virtual space providing unit 111.
The VR goggles 100b operate as follows. When the markers are far away, the VR goggles 100b (camera control device 100) identify the blinking IDs of the blinking markers 200 (light sources) attached to the multiple controlled objects.
The VR goggles 100b then generate, in the VR application, a virtual space that includes the controlled objects 300a identified by their blinking IDs, and represent the user's walking through it. The control unit 102 determines the type of each controlled object from its blinking ID, and the virtual space providing unit 111 generates an image corresponding to that controlled object 300a (a television, air conditioner, or lighting fixture). Because the user is in the virtual space, the image is generated with the viewing angle of the controlled object 300a adjusted accordingly.
In other words, when close, the VR goggles 100b calculate the positional relationship between the controlled object 300a and the VR goggles 100b, together with the blinking ID of the approaching controlled object. Based on the calculated positional relationship, the VR goggles 100b then generate the virtual space in the VR application and, as necessary, generate a proximity warning message for the controlled object 300a.
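A minimal sketch of such a proximity check, with an assumed warning threshold; the positions would come from the marker-based pose estimates.

```python
import math

WARN_DISTANCE_M = 0.5  # assumed threshold for issuing a proximity warning

def proximity_warning(user_pos, object_pos, object_kind):
    """Return a warning message when the user is too close to a real object."""
    distance = math.dist(user_pos, object_pos)
    if distance < WARN_DISTANCE_M:
        return f"Caution: {object_kind} {distance:.2f} m ahead in the real room."
    return None

# Example: warn when the wearer drifts within half a metre of the television.
print(proximity_warning((0.0, 0.0, 0.0), (0.3, 0.0, 0.2), "television"))
```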
The VR goggles 100b can also be used for tracking the user's movement and detecting obstacles in the real space. In FIG. 11 the controlled objects 300a are lighting, air conditioners, and the like, but they are not limited to these: markers may also be attached to chairs, tables, and so on, and other items such as potted plants may be made detectable as obstacles.
As mentioned above, by embedding markers 200 (vocodes) in objects in real space, it is possible to express interactions with a user wearing VR goggles.
Next, another example of the marker will be described. The marker 200 described above was a device equipped with a vocode, but the marker is not limited to this and need not include a vocode. Fig. 13 shows another example, marker 200a. As shown in Fig. 13(a), a single pattern image is drawn on the marker 200a, and the marker 200a blinks in the same way as the marker 200. Fig. 13(b) is a schematic configuration diagram of the marker 200a: it is constructed by overlaying a pattern image 203a (a transparent plate) on an LED 201a. The LED 201a blinks under the control of a control unit (not shown). As described above, this blinking operation is preferably based on Manchester-encoded on/off data, but is not limited to it.
FIG. 14 shows a robot 100a equipped with a camera control device 100c of the present disclosure. The functional configuration is the same as in FIG. 3, but the control operation of the control unit 102a differs. Specifically, the control unit 102a determines the inclination of the shape and of the pattern of the marker 200a photographed by the event camera 101, and from these can determine the posture of the marker 200a (and thus of the work object 300 or the like). In addition, the control unit 102a always keeps the focus of the event camera 101 appropriately adjusted to the distance to the marker 200a.
FIG. 15 is a flowchart showing the operation of the camera control device 100c. The event camera 101 adjusts its focus according to the distance (S401), and the control unit 102a searches for, recognizes, and tracks the blinking light source (S402). The control unit 102a performs a time-series analysis of the light source and recognizes the blinking ID (S403). The control unit 102a accumulates the event data acquired by the event camera 101 in time series (S404). The control unit 102a recognizes the pattern from the accumulated event data and calculates the posture of the marker 200a from that pattern (S406). In this example, the control unit 102a can calculate the posture of the work object 300 from the shape of the marker 200a, the inclination of its pattern, and the like.
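As one hypothetical way to estimate such a tilt from an accumulated event frame, the sketch below fits the dominant orientation of the active pixels using second-order image moments; this is an illustrative technique, not the method specified in the disclosure.

```python
import math
import numpy as np

def pattern_tilt_deg(frame):
    """Estimate the in-plane tilt of the marker pattern from an event frame.

    Computes the principal-axis orientation of the nonzero pixels via
    second-order image moments and returns it in degrees.
    """
    ys, xs = np.nonzero(frame)
    x0, y0 = xs.mean(), ys.mean()
    mu11 = ((xs - x0) * (ys - y0)).mean()
    mu20 = ((xs - x0) ** 2).mean()
    mu02 = ((ys - y0) ** 2).mean()
    return 0.5 * math.degrees(math.atan2(2.0 * mu11, mu20 - mu02))

# Example: a diagonal line of events yields a tilt of roughly 45 degrees.
print(round(pattern_tilt_deg(np.eye(32, dtype=np.int32))))  # ~45
```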
The camera control device 100 of the present disclosure provides the following operational effects. Note that in the following explanation the work object 300 is used as an example, but the description also applies to the controlled object 300a.
In the camera control device 100 of the present disclosure, the event camera 101 acquires a captured image based on the change in the amount of light (blinking) of the marker 200, which is the object of capture. The control unit 102 then acquires a pattern image by detecting the change in the amount of light (blinking) caused by the marker 200 from the captured image. The control unit 102 then determines the positional relationship with the marker 200.
With this configuration, a camera capable of low power consumption and high-speed processing, such as the event camera 101, can be used to recognize pattern images, and the positional relationship can be determined from, for example, the degree of inclination of the pattern image.
The control unit 102 may also acquire from the marker 200 a pattern image (the vocode pattern 203) that becomes recognizable, for a marker 200 within a predetermined distance, when the focus of the event camera 101 is set to a long distance or infinity.
With this configuration, a camera with low power consumption and high-speed processing such as the event camera 101 can recognize pattern images such as vocodes and determine the positional relationship. That is, although the event camera 101 cannot capture an object that is not changing, the blinking operation makes the object capturable.
In this disclosure, the marker 200 is a vocode having a light source that blinks in a predetermined pattern. The event camera 101 acquires a vocode pattern 203 (pattern image) contained in the vocode.
With this configuration, the event camera 101 and the vocode marker 200 can be used to recognize the work object 300 and its state (posture) at high speed and with low power consumption.
In addition, the control unit 102 sets the focus of the event camera 101 to infinity or a long distance when the distance to the marker 200 (photographed object) is equal to or less than a predetermined value. The control unit 102 then determines the state (posture, etc.) of the marker 200 (photographed object) based on the pattern image.
With this configuration, at long range the marker 200 (that is, the work object 300) is identified from the change in the amount of light, and at close range the vocode pattern becomes identifiable by setting the focus of the event camera 101 to infinity or the like. The posture of the marker 200 can therefore be determined.
In the present disclosure, the control unit 102 determines the state of the marker 200 or the marker 200a based on a pattern image. For the marker 200, "based on a pattern image" means based on the recognized vocode pattern; for the marker 200a, it means based on the inclination of the pattern image. In either case, the state (posture) of the marker can be determined from that pattern image.
The control unit 102 also identifies the work object 300 based on the blinking ID indicated by the change in the light intensity (blinking) of the marker 200. Then, the control unit 102 determines the state of the work object 300 according to the identified work object.
The control unit 102 can determine the type of work object 300 to which the marker 200 is attached based on the blinking ID of the marker 200. Therefore, the state of the work object 300, for example, its posture, can be determined based on the type.
In the present disclosure, the camera control device 100 further includes a storage unit that stores the captured images acquired by the event camera 101 in chronological order. The control unit 102 determines the blinking pattern based on the captured images, and determines the work object 300 (blinking ID) based on the blinking pattern.
In general, the event camera 101 can recognize a change in the amount of light, but not how long a given state lasts. In this disclosure, the work object 300 can be recognized by exploiting this characteristic of the event camera 101 while recognizing the blinking pattern.
In this disclosure, the robot 100a includes a camera control device 100. The robot 100a includes an arm 105 for holding a work object 300, and an arm control unit 103 for operating the arm 105 according to the posture of the work object 300 determined by the camera control device 100.
This configuration makes it possible to perform advanced tasks using the camera control device 100.
Furthermore, in this disclosure, the VR goggles 100b include a camera control device 100. In this VR goggles 100b, the virtual space providing unit 111 places a virtual object based on the controlled object 300a and its posture in a virtual space, and provides the virtual space to the user.
With this configuration, the camera control device 100 can be used to represent the posture, etc., of a virtual object in virtual space in accordance with an object in real space.
The camera control device of the present disclosure, and the apparatus equipped with it, have the following configurations.
[1]
An event camera that acquires a photographed image based on a change in the amount of light of an object to be photographed; and
a control unit that acquires a pattern image by detecting, from the photographed image, the change in the amount of light caused by the object to be photographed,
wherein the control unit determines a positional relationship with the object to be photographed based on the pattern image.
A camera control device.
[2]
The control unit detects, from the photographed image, the change in the amount of light caused by the object to be photographed, and acquires, from an object to be photographed within a predetermined distance, a pattern image that becomes recognizable when the focus of the event camera is set to a long distance or infinity.
The camera control device according to [1].
[3]
The object to be photographed includes a vocode having a light source that blinks in a predetermined pattern, and
the control unit acquires the pattern image contained in the vocode.
The camera control device according to [2].
[4]
The control unit sets the focus of the event camera to infinity or a long distance when the distance to the object to be photographed is equal to or less than a predetermined value.
The camera control device according to [2] or [3].
[5]
The control unit determines a state of the object to be photographed based on the pattern image.
The camera control device according to any one of [1] to [4].
[6]
The control unit identifies the object to be photographed based on ID information indicated by the change in the amount of light, and
determines the state of the identified object to be photographed according to that object.
The camera control device according to [5].
[7]
A storage unit that stores the captured images acquired by the event camera in chronological order is further provided, and
the control unit determines a blinking pattern based on the captured images and determines the object to be photographed based on the blinking pattern.
The camera control device according to [6].
[8]
An apparatus comprising the camera control device according to any one of [1] to [7], the apparatus comprising:
an arm for holding the object to be photographed; and
an arm control unit that operates the arm based on the positional relationship with the object to be photographed determined by the camera control device.
[9]
A VR goggle-type apparatus comprising the camera control device according to any one of [1] to [7], the apparatus comprising:
a virtual space providing unit that places the object to be photographed and a virtual object based on the positional relationship in a virtual space and provides the virtual space to a user.
The block diagrams used to explain the above embodiments show functional blocks. These functional blocks (components) are realized by any combination of at least one of hardware and software. Furthermore, there are no particular limitations on the method of realizing each functional block. That is, each functional block may be realized using one device that is physically or logically coupled, or may be realized using two or more devices that are physically or logically separated and connected directly or indirectly (e.g., using wires, wirelessly, etc.) and these multiple devices. The functional blocks may be realized by combining the one device or the multiple devices with software.
Functions include, but are not limited to, judgement, determination, judgment, calculation, computation, processing, derivation, investigation, search, confirmation, reception, transmission, output, access, resolution, selection, election, establishment, comparison, assumption, expectation, regard, broadcasting, notifying, communicating, forwarding, configuring, reconfiguring, allocating, mapping, and assignment. For example, a functional block (component) that performs the transmission function is called a transmitting unit or transmitter. As mentioned above, there are no particular limitations on the method of realization for either of these.
For example, the camera control device 100 in one embodiment of the present disclosure may function as a computer that performs processing of the camera control method of the present disclosure. FIG. 16 is a diagram showing an example of the hardware configuration of the camera control devices 100 and 100c in one embodiment of the present disclosure. The camera control device 100 described above may be physically configured as a computer device including a processor 1001, memory 1002, storage 1003, communication device 1004, input device 1005, output device 1006, bus 1007, and the like. In the following description, the camera control device 100 includes the camera control device 100c unless otherwise specified.
In the following description, the word "apparatus" can be interpreted as a circuit, device, unit, etc. The hardware configuration of the camera control device 100 may be configured to include one or more of the devices shown in the figure, or may be configured to exclude some of the devices.
Each function of the camera control device 100 is realized by loading specific software (programs) onto hardware such as the processor 1001 and memory 1002, causing the processor 1001 to perform calculations, control communications via the communication device 1004, and control at least one of the reading and writing of data in the memory 1002 and storage 1003.
The processor 1001, for example, runs an operating system to control the entire computer. The processor 1001 may be configured as a central processing unit (CPU) including an interface with peripheral devices, a control device, an arithmetic unit, registers, etc. For example, the above-mentioned control unit 102 may be realized by the processor 1001.
The processor 1001 also reads out programs (program codes), software modules, data, etc. from at least one of the storage 1003 and the communication device 1004 into the memory 1002, and executes various processes according to these. The programs used are those that cause the computer to execute at least some of the operations described in the above-mentioned embodiments. For example, the control unit 102 may be realized by a control program stored in the memory 1002 and running on the processor 1001, and similarly may be realized for other functional blocks. Although the above-mentioned various processes have been described as being executed by one processor 1001, they may be executed simultaneously or sequentially by two or more processors 1001. The processor 1001 may be implemented by one or more chips. The programs may be transmitted from a network via a telecommunications line.
Memory 1002 is a computer-readable recording medium, and may be composed of at least one of, for example, ROM (Read Only Memory), EPROM (Erasable Programmable ROM), EEPROM (Electrically Erasable Programmable ROM), RAM (Random Access Memory), etc. Memory 1002 may also be called a register, cache, main memory (primary storage device), etc. Memory 1002 can store executable programs (program codes), software modules, etc. for implementing a camera control method according to one embodiment of the present disclosure.
Storage 1003 is a computer-readable recording medium, and may be, for example, at least one of an optical disk such as a CD-ROM (Compact Disc ROM), a hard disk drive, a flexible disk, a magneto-optical disk (e.g., a compact disk, a digital versatile disk, a Blu-ray (registered trademark) disk), a smart card, a flash memory (e.g., a card, a stick, a key drive), a floppy (registered trademark) disk, a magnetic strip, etc. Storage 1003 may also be referred to as an auxiliary storage device. The above-mentioned storage medium may be, for example, a database, a server, or other suitable medium including at least one of memory 1002 and storage 1003.
The communication device 1004 is hardware (transmitting/receiving device) for communicating between computers via at least one of a wired network and a wireless network, and is also referred to as, for example, a network device, a network controller, a network card, a communication module, etc. The communication device 1004 may be configured to include a high-frequency switch, a duplexer, a filter, a frequency synthesizer, etc., to realize, for example, at least one of Frequency Division Duplex (FDD) and Time Division Duplex (TDD).
The input device 1005 is an input device (e.g., a keyboard, a mouse, a microphone, a switch, a button, a sensor, etc.) that accepts input from the outside. The output device 1006 is an output device (e.g., a display, a speaker, an LED lamp, etc.) that performs output to the outside. Note that the input device 1005 and the output device 1006 may be integrated into one structure (e.g., a touch panel).
Furthermore, each device such as the processor 1001 and memory 1002 is connected by a bus 1007 for communicating information. The bus 1007 may be configured using a single bus, or may be configured using different buses between each device.
Note that the communication device 1004 and the input device 1005 are not essential to the camera control device 100 of this disclosure.
The camera control device 100 may also be configured to include hardware such as a microprocessor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a programmable logic device (PLD), or a field programmable gate array (FPGA), and some or all of the functional blocks may be realized by the hardware. For example, the processor 1001 may be implemented using at least one of these pieces of hardware.
The notification of information is not limited to the aspects/embodiments described in this disclosure, and may be performed using other methods. For example, the notification of information may be performed by physical layer signaling (e.g., DCI (Downlink Control Information), UCI (Uplink Control Information)), higher layer signaling (e.g., RRC (Radio Resource Control) signaling, MAC (Medium Access Control) signaling, broadcast information (MIB (Master Information Block), SIB (System Information Block)), other signals, or a combination of these. In addition, RRC signaling may be referred to as an RRC message, and may be, for example, an RRC Connection Setup message, an RRC Connection Reconfiguration message, etc.
The processing steps, sequences, flow charts, etc. of each aspect/embodiment described in this disclosure may be reordered unless inconsistent. For example, the methods described in this disclosure present elements of various steps using an example order and are not limited to the particular order presented.
The input and output information may be stored in a specific location (e.g., memory) or may be managed using a management table. The input and output information may be overwritten, updated, or added to. The output information may be deleted. The input information may be sent to another device.
The determination may be based on a value represented by one bit (0 or 1), a Boolean value (true or false), or a numerical comparison (e.g., with a predetermined value).
Each aspect/embodiment described in this disclosure may be used alone, in combination, or switched depending on the execution. In addition, notification of specific information (e.g., notification that "X is the case") is not limited to being done explicitly, but may be done implicitly (e.g., not notifying the specific information).
Although the present disclosure has been described in detail above, it is clear to those skilled in the art that the present disclosure is not limited to the embodiments described herein. The present disclosure can be implemented in modified and altered forms without departing from the spirit and scope of the present disclosure as defined by the claims. Therefore, the description of the present disclosure is intended as an illustrative example and does not have any limiting meaning with respect to the present disclosure.
Software shall be construed broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software modules, applications, software applications, software packages, routines, subroutines, objects, executable files, threads of execution, procedures, functions, etc., whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise.
Software, instructions, information, etc. may also be transmitted and received via a transmission medium. For example, if the software is transmitted from a website, server, or other remote source using at least one of wired technologies (such as coaxial cable, fiber optic cable, twisted pair, Digital Subscriber Line (DSL)), and/or wireless technologies (such as infrared, microwave, etc.), then at least one of these wired and wireless technologies is included within the definition of a transmission medium.
The information, signals, etc. described in this disclosure may be represented using any of a variety of different technologies. For example, the data, instructions, commands, information, signals, bits, symbols, chips, etc. that may be referred to throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or magnetic particles, optical fields or photons, or any combination thereof.
Note that the terms explained in this disclosure and the terms necessary for understanding this disclosure may be replaced with terms having the same or similar meanings. For example, at least one of the channel and the symbol may be a signal (signaling). Also, the signal may be a message. Also, a component carrier (CC) may be called a carrier frequency, a cell, a frequency carrier, etc.
In addition, the information, parameters, etc. described in this disclosure may be represented using absolute values, may be represented using relative values from a predetermined value, or may be represented using other corresponding information. For example, a radio resource may be indicated by an index.
The names used for the parameters described above are not intended to be limiting in any way. Furthermore, the formulas etc. using these parameters may differ from those explicitly disclosed in this disclosure. The various channels (e.g., PUCCH, PDCCH, etc.) and information elements may be identified by any suitable names, and the various names assigned to these various channels and information elements are not intended to be limiting in any way.
In this disclosure, terms such as "Mobile Station (MS)," "user terminal," "User Equipment (UE)," and "terminal" may be used interchangeably.
A mobile station may also be referred to by those skilled in the art as a subscriber station, mobile unit, subscriber unit, wireless unit, remote unit, mobile device, wireless device, wireless communication device, remote device, mobile subscriber station, access terminal, mobile terminal, wireless terminal, remote terminal, handset, user agent, mobile client, client, or some other suitable terminology.
As used in this disclosure, the terms "determining" and "determining" may encompass a wide variety of actions. "Determining" and "determining" may include, for example, judging, calculating, computing, processing, deriving, investigating, looking up, search, inquiry (e.g., searching in a table, database, or other data structure), and considering ascertaining as "judging" or "determining." Also, "determining" and "determining" may include receiving (e.g., receiving information), transmitting (e.g., sending information), input, output, accessing (e.g., accessing data in memory), and considering ascertaining as "judging" or "determining." Additionally, "judgment" and "decision" can include considering resolving, selecting, choosing, establishing, comparing, etc., to have been "judged" or "decided." In other words, "judgment" and "decision" can include considering some action to have been "judged" or "decided." Additionally, "judgment (decision)" can be interpreted as "assuming," "expecting," "considering," etc.
The terms "connected," "coupled," or any variation thereof, refer to any direct or indirect connection or coupling between two or more elements, and may include the presence of one or more intermediate elements between two elements that are "connected" or "coupled" to one another. The coupling or connection between elements may be physical, logical, or a combination thereof. For example, "connected" may be read as "access." As used in this disclosure, two elements may be considered to be "connected" or "coupled" to one another using at least one of one or more wires, cables, and printed electrical connections, as well as electromagnetic energy having wavelengths in the radio frequency range, microwave range, and optical (both visible and invisible) range, as some non-limiting and non-exhaustive examples.
As used in this disclosure, the phrase "based on" does not mean "based only on," unless expressly stated otherwise. In other words, the phrase "based on" means both "based only on" and "based at least on."
Any reference to an element using a designation such as "first," "second," etc., used in this disclosure does not generally limit the quantity or order of those elements. These designations may be used in this disclosure as a convenient method of distinguishing between two or more elements. Thus, a reference to a first and a second element does not imply that only two elements may be employed or that the first element must precede the second element in some way.
When the terms "include," "including," and variations thereof are used in this disclosure, these terms are intended to be inclusive, similar to the term "comprising." Additionally, the term "or," as used in this disclosure, is not intended to be an exclusive or.
In this disclosure, where articles have been added through translation, such as a, an, and the in English, this disclosure may include that the nouns following these articles are plural.
In this disclosure, the term "A and B are different" may mean "A and B are different from each other." The term may also mean "A and B are each different from C." Terms such as "separate" and "combined" may also be interpreted in the same way as "different."
100...camera control device, 300...work object, 200...marker, 201...LED, 202...diffuser, 203...bokode pattern, 204...lens set, 100a...robot, 101...event camera, 102...control unit, 103...arm control unit, 104...arm motor, 105...arm, 106...wheel control unit, 107...wheel motor, 108...wheel.
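Read as a component model, the numbered parts above group into a marker-side optical stack and a robot-side control stack. The following is a hypothetical sketch of that grouping only; all class and field names are invented for illustration and appear nowhere in the publication.

```python
from dataclasses import dataclass

@dataclass
class Marker:
    """Marker 200 attached to work object 300: the optical stack whose
    blinking light the event camera observes."""
    led: str = "LED (201)"
    diffuser: str = "diffuser (202)"
    bokode_pattern: str = "bokode pattern (203)"
    lens_set: str = "lens set (204)"

@dataclass
class Robot:
    """Robot 100a carrying the camera control device 100."""
    event_camera: str = "event camera (101)"
    control_unit: str = "control unit (102)"
    arm_chain: str = "arm control unit (103) -> arm motor (104) -> arm (105)"
    wheel_chain: str = "wheel control unit (106) -> wheel motor (107) -> wheels (108)"
```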
Claims (9)
- A camera control device comprising: an event camera that acquires a captured image based on a change in the amount of light from an object to be photographed; and a control unit that acquires a pattern image by detecting, from the captured image, the change in light amount caused by the object to be photographed, wherein the control unit determines a positional relationship with the object to be photographed based on the pattern image.
- The camera control device according to claim 1, wherein the control unit detects, from the captured image, the change in light amount caused by the object to be photographed, and acquires from the object to be photographed a pattern image that becomes recognizable when the focus of the event camera is set to a long distance or to infinity with respect to an object to be photographed located within a predetermined distance.
- The camera control device according to claim 2, wherein the object to be photographed includes a bokode having a light source that blinks in a predetermined pattern, and the control unit acquires the pattern image contained in the bokode.
- The camera control device according to claim 2, wherein the control unit sets the focus of the event camera to infinity or to a long distance when the distance to the object to be photographed is equal to or less than a predetermined value.
- The camera control device according to claim 1, wherein the control unit determines a state of the object to be photographed based on the pattern image.
- The camera control device according to claim 5, wherein the control unit identifies the object to be photographed based on ID information indicated by the change in light amount, and determines the state of the object to be photographed in accordance with the identified object.
- The camera control device according to claim 6, further comprising a storage unit that stores the captured images acquired by the event camera in chronological order, wherein the control unit determines a blinking pattern based on the captured images and identifies the ID information of the object to be photographed based on the blinking pattern.
- An apparatus comprising the camera control device according to claim 1, the apparatus further comprising: an arm for holding the object to be photographed; and an arm control unit that operates the arm based on the positional relationship with the object to be photographed determined by the camera control device.
- A VR goggle type apparatus comprising the camera control device according to claim 1, the apparatus further comprising a virtual space providing unit that arranges, in a virtual space, a virtual object based on the object to be photographed and the positional relationship, and provides the virtual space to a user.
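Purely as an illustration of the processing recited above, and not as the publication's own implementation, the sketch below combines two pieces: a distance-based focus rule of the kind recited in claims 2 and 4 (under an assumed camera object with a writable focus attribute), and one possible realization of the blinking-pattern ID identification of claims 6 and 7 from stored event-camera output. Every name in it (Camera, set_focus, events_to_frames, decode_blink_id, threshold_m, roi) is hypothetical.

```python
import numpy as np

class Camera:
    """Stand-in for the event camera's focus interface (hypothetical)."""
    focus: float = 1.0

def set_focus(camera, distance_m, threshold_m=0.3):
    """Claims 2/4-style rule: within a predetermined distance, set focus to
    infinity so the marker's collimated bokode pattern becomes recognizable."""
    camera.focus = float("inf") if distance_m <= threshold_m else distance_m

def events_to_frames(events, shape, n_bins, t0, t1):
    """Bin event-camera events (x, y, t, polarity) into n_bins binary frames;
    each frame marks the pixels whose light amount changed in that time bin,
    which is the information an event camera reports natively."""
    frames = np.zeros((n_bins,) + shape, dtype=np.uint8)
    for x, y, t, _pol in events:
        if t0 <= t < t1:
            b = int((t - t0) / (t1 - t0) * n_bins)
            frames[min(b, n_bins - 1), y, x] = 1
    return frames

def decode_blink_id(frames, roi, known_ids):
    """Claims 6/7-style identification: read the blinking pattern inside
    roi = (y0, y1, x0, x1) as a bit string and look up the marker ID."""
    y0, y1, x0, x1 = roi
    activity = frames[:, y0:y1, x0:x1].sum(axis=(1, 2))
    bits = "".join("1" if a > activity.mean() else "0" for a in activity)
    return known_ids.get(bits)  # None if the blinking pattern is not registered

# Synthetic example: a marker at pixel (10, 10), 0.2 m away, blinking on in
# every other 0.1 s time bin (pattern 10101010).
cam = Camera()
set_focus(cam, distance_m=0.2)           # -> focus forced to infinity
events = [(10, 10, t, 1) for t in (0.05, 0.25, 0.45, 0.65)]
frames = events_to_frames(events, shape=(32, 32), n_bins=8, t0=0.0, t1=0.8)
print(decode_blink_id(frames, roi=(8, 13, 8, 13),
                      known_ids={"10101010": "marker-200"}))  # -> marker-200
```

A full implementation would go on to derive the positional relationship of claim 1 from the geometry of the recognized pattern image; claim 8's arm control unit would then consume that relationship to operate the arm.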
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2022205769 | 2022-12-22 | | |
JP2022-205769 | 2022-12-22 | | |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2024135789A1 (en) | 2024-06-27 |
Family
ID=91588925
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2023/046000 (WO2024135789A1) | Camera control device and apparatus equipped with same | 2022-12-22 | 2023-12-21 |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2024135789A1 (en) |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110017826A1 (en) * | 2009-07-23 | 2011-01-27 | Massachusetts Institute Of Technology | Methods and apparatus for bokeh codes |
JP2013214223A (en) * | 2012-04-03 | 2013-10-17 | Nikon Corp | Information output device and information detection device |
JP2013214254A (en) * | 2012-04-04 | 2013-10-17 | Nikon Corp | Information detection device |
US20190096068A1 (en) * | 2017-09-28 | 2019-03-28 | Samsung Electronics Co., Ltd. | Camera pose and plane estimation using active markers and a dynamic vision sensor |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP3131235B1 (en) | | Method and apparatus for controlling device |
CN103400108A (en) | | Face identification method and device as well as mobile terminal |
US20200314850A1 (en) | | Frequency band determination device, head-mounted display, frequency band determination method, and program |
US9495004B2 (en) | | Display device adjustment by control device |
EP3171293A1 (en) | | Method and device for controlling intelligent equipment |
CN108375168B (en) | | Method for adjusting air conditioning equipment |
CN105681441A (en) | | Data transmission method and apparatus |
US20230152895A1 (en) | | Method for Changing Displayed Scene, Intelligent Display Screen and Readable Storage Medium |
JP2019168387A (en) | | Building determination system |
CN105242666B (en) | | A kind of method and apparatus that control equipment is mobile |
CN113778255B (en) | | Touch recognition method and device |
CN114915511B (en) | | Control method and device of split equipment |
WO2024135789A1 (en) | | Camera control device and apparatus equipped with same |
KR20160004712A (en) | | Method for communicating wireless in accordance with an unauthorized frequency band and a electronic device implementing the same |
KR20150136391A (en) | | Method for providing service and an electronic device thereof |
WO2023160507A1 (en) | | Virtual reality device, lens barrel positional state detection method and apparatus therefor, and medium |
CN113873593B (en) | | Network switching system, method, device and storage medium |
US10869292B2 (en) | | Conditionally providing location-based functions |
KR101614941B1 (en) | | Method for pairing a first terminal with at lesat one terminal selected among second terminals by using rssi information, terminal and computer-readable recording media using the same |
JP2018128917A (en) | | Input system |
WO2020075364A1 (en) | | Check-in determination device |
CN104375641A (en) | | Control method and electronic device |
CN103430514A (en) | | Semantic information transmitting method, semantic information receiving method, terminal, server and system |
JP2020057864A (en) | | Terminal mounting device and portable terminal |
JP7246255B2 (en) | | Information processing device and program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 23907149; Country of ref document: EP; Kind code of ref document: A1 |