WO2022224402A1 - Position detection device, position detection method, and position detection program - Google Patents
Position detection device, position detection method, and position detection program
- Publication number
- WO2022224402A1 (PCT/JP2021/016290, JP2021016290W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- coordinates
- map
- dimensional
- person
- position detection
- Prior art date
Links
- detection method (title, claims, abstract, description)
- transformation (claims, abstract, description)
- calculation method (claims, description)
- method (claims, description)
- transforming (abstract)
- diagram (description)
- monitoring process (description)
- conversion (description)
- function (description)
- acceleration (description)
- communication (description)
- displacement (description)
- imaging method (description)
- support-vector machine (description)
- composite material (description)
- matrix (description)
- pattern recognition (description)
- semiconductor (description)
- smart glass (description)
- solid (description)
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/16—Constructional details or arrangements
- G06F1/1613—Constructional details or arrangements for portable computers
- G06F1/163—Wearable computers, e.g. on a belt
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/18—Image warping, e.g. rearranging pixels individually
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/62—Text, e.g. of license plates, overlay texts or captions on TV images
- G06V20/63—Scene text, e.g. street names
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/19—Recognition using electronic means
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
- H04N7/181—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30232—Surveillance
Definitions
- the present disclosure relates to a position detection device, a position detection method, and a position detection program.
- JP 2017-34511 A (for example, paragraph 0025, FIG. 2)
- the above conventional system has the problem that it requires a large number of surveillance cameras installed to face the same direction in order to detect the position coordinates of the worker.
- An object of the present disclosure is to provide a position detection device, a position detection method, and a position detection program that make it possible to solve the above problems.
- a position detection device of the present disclosure includes: a human detection unit that receives a plurality of images captured by a plurality of surveillance cameras, performs processing for detecting a person in each of the plurality of images, and outputs a plurality of two-dimensional camera coordinates indicating the detected position of the person;
- a coordinate conversion unit that converts the plurality of two-dimensional camera coordinates into a plurality of three-dimensional coordinates represented by a predetermined common coordinate system;
- a map coordinate determination unit that generates two-dimensional map coordinates based on the plurality of three-dimensional coordinates; and
- a display control unit that acquires a map and outputs video data in which position information of the two-dimensional map coordinates is superimposed on the map.
- Another position detection device of the present disclosure includes: a human detection unit that receives an image captured by a surveillance camera, performs processing for detecting a person in the image, and outputs two-dimensional camera coordinates indicating the detected position of the person;
- a coordinate conversion unit that converts the two-dimensional camera coordinates into three-dimensional coordinates represented by a predetermined common coordinate system;
- a terminal position calculation unit that calculates terminal position coordinates indicating the position of a mobile terminal carried by the person, based on detection values of an inertial sensor of the mobile terminal;
- a map coordinate determination unit that calculates two-dimensional map coordinates based on the three-dimensional coordinates during a period in which the person is detected, and calculates the two-dimensional map coordinates based on the terminal position coordinates during a period in which the person is not detected; and
- a display control unit that acquires a map and outputs video data in which position information of the two-dimensional map coordinates is superimposed on the map.
- Still another position detection device of the present disclosure includes: a human detection unit that receives an image captured by a surveillance camera, performs processing for detecting a person in the image, and outputs two-dimensional camera coordinates indicating the detected position of the person;
- a coordinate conversion unit that converts the two-dimensional camera coordinates into three-dimensional coordinates represented by a predetermined common coordinate system;
- a character recognition unit that recognizes a character string on a nameplate of a device in a wearable camera image captured by a wearable camera worn by the person;
- a character search unit that searches a layout drawing of the device for the recognized character string;
- a map coordinate determination unit that determines two-dimensional map coordinates based on the found position when the recognized character string is found in the layout drawing, and calculates the two-dimensional map coordinates based on the three-dimensional coordinates when it is not found; and
- a display control unit that acquires the map and outputs video data in which position information of the two-dimensional map coordinates is superimposed on the map.
- FIG. 1 is a diagram showing a plurality of monitoring cameras used for position detection by the position detection device according to Embodiment 1;
- FIGS. 2(A) and 2(B) are diagrams showing, on a map, two-dimensional coordinates of a person detected based on a plurality of images captured by a plurality of surveillance cameras;
- FIG. 3 is a functional block diagram schematically showing the configuration of the position detection device according to Embodiment 1;
- FIG. 4 is a diagram showing the hardware configuration of the position detection device according to Embodiment 1;
- FIG. 5 is a flowchart showing processing of the human detection unit of the position detection device according to Embodiment 1;
- FIG. 6 is a flowchart showing processing of the coordinate conversion unit of the position detection device according to Embodiment 1;
- FIG. 7 is a flowchart showing processing of the map coordinate determination unit of the position detection device according to Embodiment 1;
- FIG. 8 is a diagram showing a monitoring camera and a mobile terminal used for position detection by the position detection device according to Embodiment 2;
- FIG. 9(A) is a diagram showing, on a map, the two-dimensional coordinates of a person detected based on an image taken by a surveillance camera, and FIG. 9(B) is a diagram showing, on a map, the two-dimensional coordinates of a person detected by pedestrian dead reckoning (PDR);
- FIG. 10 is a functional block diagram schematically showing the configuration of the position detection device according to Embodiment 2;
- FIG. 11 is a flowchart showing processing of the terminal position calculation unit of the position detection device according to Embodiment 2;
- FIG. 12 is a flowchart showing processing of the map coordinate determination unit of the position detection device according to Embodiment 2;
- FIG. 13 is a diagram showing a monitoring camera and a wearable camera used for position detection by the position detection device according to Embodiment 3;
- FIG. 14(A) is a diagram showing an example of an instrument panel, instruments, and a nameplate, and FIG. 14(B) is a diagram showing two-dimensional map coordinates on a map;
- FIG. 15 is a functional block diagram schematically showing the configuration of the position detection device according to Embodiment 3;
- FIG. 16 is a flowchart showing processing of the character recognition unit of the position detection device according to Embodiment 3;
- FIG. 17 is a flowchart showing processing of the character search unit of the position detection device according to Embodiment 3.
- a position detection device, a position detection method, and a position detection program according to embodiments will be described below with reference to the drawings.
- the following embodiments are merely examples; the embodiments can be combined as appropriate, and each embodiment can be modified as appropriate.
- in the drawings, components that are the same or correspond to each other are given the same reference symbol.
- FIG. 1 is a diagram showing a plurality of monitoring cameras 11_1 to 11_n (n is a positive integer) that are used for position detection by a position detection device 10 according to Embodiment 1.
- Surveillance cameras 11_1 to 11_n are installed in a predetermined area 80 .
- Area 80 contains instrument panel 40 and machine 50 .
- the monitoring cameras 11_1 to 11_n are fixed cameras.
- the monitoring cameras 11_1 to 11_n may be rotatable PTZ cameras, but in this case, it is necessary to have a function of notifying the position detection device 10 of camera parameters.
- the monitoring cameras 11_1 to 11_n respectively photograph the photographing ranges R1 to Rn and transmit images I 1 to I n to the position detection device 10 .
- the position detection device 10 calculates the two-dimensional (2D) map coordinates (X, Y) of the person 90 based on the images I 1 to I n and generates video data for displaying, on a display device, information indicating the 2D map coordinates (X, Y) on the map 81 of the area 80.
- the area is, for example, inside a factory. Person 90 is, for example, a worker.
- FIGS. 2(A) and 2(B) show, on the map 81 of the area 80, the 2D map coordinates (X, Y) of a person 90 detected based on a plurality of images I 1 to I n taken by the plurality of surveillance cameras 11_1 to 11_n.
- FIG. 2(A) shows an example in which the 2D map coordinates (X, Y) are calculated from the images of three surveillance cameras 11_1, 11_2, and 11_n, and FIG. 2(B) shows another example in which the 2D map coordinates (X, Y) have been calculated.
- a map 81 of the area 80 in FIGS. 2A and 2B is acquired from an external storage device. However, the map 81 may be stored in the storage device within the position detection device 10 .
- FIG. 3 is a functional block diagram schematically showing the configuration of the position detection device 10 according to Embodiment 1.
- the position detection device 10 is a device capable of implementing the position detection method according to the first embodiment.
- the position detection device 10 can implement the position detection method according to the first embodiment by executing the position detection program.
- the position detection device 10 has an image reception section 13 , a human detection section 14 , a coordinate conversion section 15 , a map coordinate determination section 16 and a display control section 17 .
- the image receiving unit 13 receives the plurality of images I 1 to I n captured by the plurality of surveillance cameras 11_1 to 11_n and transmitted from the image transmitting units 12_1 to 12_n, and outputs the images I 1 to I n to the human detection unit 14.
- the video receiver 13 is also called a communication circuit or communication interface.
- the human detection unit 14 receives the images I 1 to I n, performs processing for detecting the person 90 in each of the images I 1 to I n, and outputs a plurality of two-dimensional (2D) camera coordinates (u 1 , v 1 ) to (u n , v n ) indicating the position of the detected person 90.
- the coordinate systems of the 2D camera coordinates (u 1 , v 1 ) to (u n , v n ) are different from each other.
- the coordinate conversion unit 15 is a unit that converts two-dimensional (2D) coordinates into three-dimensional (3D) coordinates.
- the coordinate conversion unit 15 converts the plurality of 2D camera coordinates (u 1 , v 1 ) to (u n , v n ) into a plurality of 3D coordinates (X 1 , Y 1 , Z 1 ) to (X n , Y n , Z n ) represented by a predetermined common coordinate system.
- a common coordinate system is, for example, the world coordinate system.
- a map coordinate determination unit 16 generates 2D map coordinates (X, Y) based on a plurality of 3D coordinates (X 1 , Y 1 , Z 1 ) to (X n , Y n , Z n ).
- the map coordinate determination unit 16 calculates the 2D map coordinates (X, Y) using the arithmetic mean of the coordinate values of the plurality of 3D coordinates (X 1 , Y 1 , Z 1 ) to (X n , Y n , Z n ).
- the display control unit 17 outputs video data in which the position information of the 2D map coordinates (X, Y) is superimposed on the map 81 of the area 80 .
- the display device 18 displays a map 81 of the area 80 and positional information of 2D map coordinates (X, Y). In the example of FIG. 3, the map 81 is displayed based on map information L acquired from an external storage device.
- FIG. 4 is a diagram showing the hardware configuration of the position detection device 10 according to Embodiment 1.
- the position detection device 10 includes a processor 101 such as a CPU (Central Processing Unit), a memory 102 that is a volatile storage device, a storage device that is a non-volatile storage device such as a hard disk drive (HDD) or a solid-state drive (SSD), and a communication unit 104 for communicating with the outside.
- the memory 102 is, for example, a volatile semiconductor memory such as a RAM (Random Access Memory).
- the processing circuitry may be dedicated hardware or processor 101 executing a program stored in memory 102 .
- the processor 101 may be any of a processing device, an arithmetic device, a microprocessor, a microcomputer, and a DSP (Digital Signal Processor).
- the processing circuit may be, for example, a single circuit, a composite circuit, a programmed processor, a parallel-programmed processor, an ASIC (Application Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array), or a combination of any of these.
- the position detection program is realized by software, firmware, or a combination of software and firmware.
- Software and firmware are written as programs and stored in memory 102 .
- the processor 101 reads out and executes the position detection program stored in the memory 102, thereby implementing the functions of the units shown in FIG.
- position detection device 10 may be partially realized by dedicated hardware and partially realized by software or firmware.
- the processing circuitry may implement each of the functions described above in hardware, software, firmware, or any combination thereof.
- FIG. 5 is a flow chart showing processing of the human detection unit 14 of the position detection device 10 according to the first embodiment.
- the human detection unit 14 first receives the images I 1 to I n (step S11).
- the human detection unit 14 sequentially moves the detection window 92 in the image 91 of each frame to perform processing for detecting the person 90 (steps S12 to S14).
- the human detection unit 14 sequentially moves the detection window 92 in the image 91 of each frame, extracts a HOG (Histograms of Oriented Gradients) feature amount, which is obtained by histogramming the gradient directions of luminance (color, brightness) in local regions, and detects the person 90 by classifying the feature amount with, for example, a support vector machine (SVM).
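- as a concrete illustration of this kind of sliding-window detection, the following is a minimal sketch assuming OpenCV and its bundled pedestrian SVM; the patent itself does not prescribe a library, the window stride, or the scale factor, so those choices are assumptions.

```python
import cv2

# Minimal sketch of HOG + SVM person detection (OpenCV assumed;
# the patent only specifies HOG features scanned with a detection window).
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def detect_person(frame):
    """Return 2D camera coordinates (u, v) of detected people.

    Each coordinate is taken at the bottom-center of a bounding box,
    i.e. roughly the person's foot position on the floor.
    """
    rects, weights = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)
    coords = []
    for (x, y, w, h) in rects:
        u = x + w // 2   # horizontal center of the box
        v = y + h        # bottom edge: approximate foot position
        coords.append((u, v))
    return coords
```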
- FIG. 6 is a flow chart showing processing of the coordinate conversion unit 15 of the position detection device 10 according to the first embodiment.
- the coordinate conversion unit 15 receives the 2D camera coordinates (u, v) of the person 90 on the captured image from the human detection unit 14 (step S21), and uses the camera parameters of the surveillance camera, namely the intrinsic matrix A and the extrinsic matrix [R|t] composed of the rotation matrix R and the translation vector t.
- inverse transformation of the perspective projection is then performed by equation (1) (step S24); in the standard pinhole model this projection is

$$ s \begin{pmatrix} u \\ v \\ 1 \end{pmatrix} = A \, [R \mid t] \begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix} \qquad (1) $$

- the coordinate conversion unit 15 outputs the 3D coordinates (X, Y, Z) of the person 90.
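- as an illustration of this inverse transformation, the sketch below back-projects a pixel onto the floor plane. The ground-plane constraint (Z = 0), the variable names, and the NumPy usage are assumptions added here: the patent only states that 2D camera coordinates are converted to 3D coordinates using the camera parameters, and a single camera ray needs some such constraint to determine depth.

```python
import numpy as np

def backproject_to_floor(u, v, A, R, t, z_floor=0.0):
    """Invert the pinhole projection of Eq. (1) for a point on the
    plane Z = z_floor (assumed constraint).

    A : 3x3 intrinsic matrix, R : 3x3 rotation, t : translation vector.
    Returns the 3D world coordinates (X, Y, Z) of pixel (u, v).
    """
    pixel = np.array([u, v, 1.0])
    ray_cam = np.linalg.inv(A) @ pixel      # viewing ray in the camera frame
    ray_world = R.T @ ray_cam               # rotate the ray into the world frame
    cam_center = -R.T @ np.asarray(t).reshape(3)  # camera center in world frame
    # Intersect the ray with the plane Z = z_floor.
    s = (z_floor - cam_center[2]) / ray_world[2]
    return cam_center + s * ray_world
```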
- FIG. 7 is a flow chart showing processing of the map coordinate determination unit 16 of the position detection device 10 according to the first embodiment.
- the map coordinate determination unit 16 obtains the 3D coordinates (X 1 , Y 1 , Z 1 ) based on the image of surveillance camera #1, the 3D coordinates (X 2 , Y 2 , Z 2 ) based on the image of surveillance camera #2, ..., and the 3D coordinates (X n , Y n , Z n ) based on the image of surveillance camera #n (steps S31 to S33).
- the map coordinate determination unit 16 counts the number of coordinates actually obtained, that is, the count value M (the number of cameras in whose image the person 90 was detected).
- the map coordinate determination unit 16 then calculates the value of the 2D map coordinates (X, Y) from the values of the obtained 3D coordinates by averaging, that is,

$$ X = \frac{1}{M}\sum_{i=1}^{M} X_i, \qquad Y = \frac{1}{M}\sum_{i=1}^{M} Y_i . $$

- the map coordinate determination unit 16 outputs the values of the 2D map coordinates (X, Y).
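- a minimal sketch of this averaging step, assuming the per-camera 3D estimates are collected in a Python list; only cameras that detected the person contribute, matching the count value M above.

```python
import numpy as np

def fuse_map_coordinates(coords_3d):
    """Fuse per-camera 3D estimates into one 2D map coordinate.

    coords_3d: list of (X, Y, Z) tuples, one per camera that actually
    detected the person (so len(coords_3d) == M). Returns (X, Y), or
    None when the person was not visible to any camera.
    """
    pts = np.asarray(coords_3d, dtype=float)
    if pts.size == 0:
        return None
    # Arithmetic mean over the M detections; the Z component is dropped
    # because the map is two-dimensional.
    return (pts[:, 0].mean(), pts[:, 1].mean())
```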
- as described above, with the position detection device 10, the position detection method, or the position detection program according to the first embodiment, the position of the person 90 can be detected based on the images I 1 to I n of the plurality of surveillance cameras 11_1 to 11_n having different positions and orientations.
- since the camera parameters of the plurality of surveillance cameras 11_1 to 11_n do not need to be common, it is possible to accurately detect the position of a person using the images of existing surveillance cameras.
- FIG. 8 is a diagram showing monitoring cameras 11_1 to 11_n and a mobile terminal 21, which are configurations used for position detection by the position detection device 20 according to the second embodiment.
- the mobile terminal 21 is carried by a person 90 to be detected.
- One or more surveillance cameras 11_1-11_n are installed in a predetermined area 80 where the instrument panel 40 and the machine 50 are located.
- the surveillance cameras 11_1 to 11_n may be existing cameras.
- the monitoring cameras 11_1 to 11_n capture images of the imaging ranges R1 to Rn, respectively, and transmit images I 1 to I n to the position detection device 20 .
- the position detection device 20 calculates the 2D map coordinates (X, Y) of the person 90 based on the images I 1 to I n and generates video data for displaying, on the display device 18, information indicating the 2D map coordinates (X, Y) on the map 81 of the area 80.
- FIG. 9A is a diagram showing the 2D map coordinates (X, Y) of a person 90 detected based on the images I 1 to In taken by the monitoring cameras 11_1 to 11_n on the map 81 of the area 80.
- FIG. 9B is a diagram showing the 2D map coordinates (X, Y) based on the 2D coordinates (Xp, Yp) of the person 90 detected by pedestrian dead reckoning (PDR) on the map 81 of the area 80.
- FIG. 9A shows an example in which 2D map coordinates (X, Y) are calculated by two monitoring cameras 11_1 and 11_n.
- FIG. 9B shows the 2D map coordinates (X, Y) obtained as a result of the position calculation performed by the PDR after the person 90 has moved out of the imaging range of the surveillance camera.
- FIG. 10 is a functional block diagram schematically showing the configuration of the position detection device 20 according to the second embodiment.
- the position detection device 20 is a device capable of implementing the position detection method according to the second embodiment.
- the position detection device 20 can implement the position detection method according to the second embodiment by executing the position detection program.
- the position detection device 20 includes a video reception unit 13, a human detection unit 14, a coordinate conversion unit 15, a detection value reception unit 23, a terminal position calculation unit 24, a map coordinate determination unit 16a, and a display control unit 17.
- the hardware configuration of the position detection device 20 is the same as that shown in FIG. 4.
- the image receiving unit 13 receives the image I 1 captured by the monitoring camera 11_1 (one of the one or more monitoring cameras) and transmitted from the image transmitting unit 12_1, and passes the image I 1 to the human detection unit 14.
- the human detection unit 14 receives the image I 1 , performs processing for detecting the person 90 in the image I 1 , and outputs 2D camera coordinates (u 1 , v 1 ) indicating the position of the detected person 90 .
- the coordinate conversion unit 15 is a 2D/3D coordinate conversion unit.
- the coordinate transformation unit 15 transforms the 2D camera coordinates (u 1 , v 1 ) into 3D coordinates (X 1 , Y 1 , Z 1 ) represented by a predetermined common coordinate system.
- the detected value receiving unit 23 receives the detected value, which is the sensor value of the inertial sensor 21 a of the mobile terminal 21 carried by the person 90 , and outputs the detected value to the terminal position calculation unit 24 .
- the inertial sensor 21a is, for example, a device capable of detecting translational motion in three orthogonal axial directions and rotational motion.
- an inertial sensor is a device that detects translational motion with an acceleration sensor [m/s 2 ] and detects rotational motion with an angular velocity (gyro) sensor [deg/sec].
- the terminal position calculation unit 24 calculates terminal position coordinates (X p , Y p ) indicating the position of the mobile terminal 21 based on the detected values.
- the map coordinate determination unit 16a calculates the 2D map coordinates (X, Y) based on the 3D coordinates during the period in which the person 90 is detected, and calculates the 2D map coordinates (X, Y) based on the terminal position coordinates (X p , Y p ) during the period in which the person 90 is not detected.
- the display control unit 17 outputs video data in which the position information of the 2D map coordinates (X, Y) is superimposed on the map 81 of the area 80 .
- the display device 18 displays a map 81 of the area 80 and positional information of 2D map coordinates (X, Y).
- the map 81 is displayed based on map information L obtained from an external storage device.
- FIG. 11 is a flow chart showing processing of the terminal position calculation unit 24 of the position detection device 20 according to the second embodiment.
- the terminal position calculation unit 24 calculates a rotation matrix from the detected values (step S42), transforms the attitude of the mobile terminal 21 (step S43), and calculates the acceleration of the mobile terminal 21 (step S44).
- the displacement is calculated by double-integrating the acceleration (step S45), and the terminal position coordinates (X p , Y p ) are output based on this displacement (step S46).
- the terminal position calculation unit 24 repeats the above processing (steps S42 to S46) until, for example, position detection by the monitoring camera is resumed (step S41).
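- the sketch below mirrors one pass of this loop under simplifying assumptions: the attitude (rotation matrix) is already tracked from the gyro, the gravity vector and the sampling period dt are assumed constants, and raw double integration is used even though it drifts over time (which is why the accuracy check described later is useful).

```python
import numpy as np

GRAVITY = np.array([0.0, 0.0, 9.81])  # assumed gravity vector [m/s^2]

def pdr_step(accel_body, R_world_from_body, vel, pos, dt):
    """One iteration of the FIG. 11 loop (simplified sketch).

    accel_body: 3-axis accelerometer reading in the sensor frame.
    R_world_from_body: rotation matrix from the gyro-tracked attitude
    (steps S42-S43). Returns updated (vel, pos); (pos[0], pos[1]) play
    the role of the terminal position coordinates (Xp, Yp).
    """
    # Step S44: rotate the acceleration into the world frame, remove gravity.
    accel_world = R_world_from_body @ accel_body - GRAVITY
    # Steps S45-S46: integrate once for velocity, again for position.
    vel = vel + accel_world * dt
    pos = pos + vel * dt
    return vel, pos
```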
- FIG. 12 is a flow chart showing processing of the map coordinate determination unit 16a of the position detection device 20 according to the second embodiment.
- the map coordinate determination unit 16a receives the image and, if the person 90 is detected (YES in step S53), outputs the 2D map coordinates (X, Y) based on the image (step S54); if no person is detected (NO in step S53), it outputs the terminal position coordinates (X p , Y p ) obtained by the PDR based on the detection values of the inertial sensor 21a as the 2D map coordinates (X, Y) (step S55).
- in this way, while the person 90 is within the shooting range, the 2D map coordinates (X, Y) are output by the relatively accurate method based on the image I 1 and the like, and when the person 90 is outside the shooting range, the terminal position coordinates calculated based on the detection values of the inertial sensor 21a can be output as the 2D map coordinates (X, Y).
- by comparing the PDR calculation result with past calculation results and, when the error is a predetermined value or more, taking a measure such as notifying that the accuracy is low, the disadvantages of position detection by the relatively low-accuracy PDR can be reduced.
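- the following sketch combines the FIG. 12 decision with the accuracy check just described; the function names, the jump threshold max_jump, and the notification mechanism are illustrative assumptions, not details from the patent.

```python
import math

def decide_map_coordinates(camera_coord, pdr_coord, last_coord, max_jump=5.0):
    """Sketch of the FIG. 12 decision (names and threshold assumed).

    camera_coord: (X, Y) from the surveillance-camera pipeline, or None
    when the person is not detected in the image (NO in step S53).
    pdr_coord: (Xp, Yp) from the inertial-sensor PDR.
    last_coord: previously output 2D map coordinate, used to flag
    implausible PDR jumps as described in the text.
    """
    if camera_coord is not None:          # YES in step S53
        return camera_coord, "camera"
    # NO in step S53: fall back to the PDR result (step S55).
    if last_coord is not None:
        jump = math.hypot(pdr_coord[0] - last_coord[0],
                          pdr_coord[1] - last_coord[1])
        if jump >= max_jump:              # error of a predetermined value or more
            print("warning: PDR accuracy is low")  # notification only
    return pdr_coord, "pdr"
```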
- FIG. 13 is a diagram showing monitoring cameras 11_1 to 11_n and a wearable camera 31 that are used for position detection by the position detection device 30 according to the third embodiment.
- the wearable camera 31 is a small camera that captures the line-of-sight direction of the worker, who is the person 90, and is also called smart glasses. Wearable camera 31 is carried by person 90 .
- One or more surveillance cameras 11_1-11_n are installed in a predetermined area 80 where the instrument panel 40 and the machine 50 are located.
- the surveillance cameras 11_1 to 11_n may be existing cameras.
- the monitoring cameras 11_1 to 11_n respectively photograph the photographing ranges R1 to Rn and transmit the images I 1 to I n to the position detection device 30.
- the position detection device 30 calculates the 2D map coordinates (X, Y) of the person 90 based on the images I 1 to I n and generates video information for displaying, on a display device, information indicating the 2D map coordinates (X, Y) on the map 81 of the area 80.
- FIG. 14(A) is a diagram showing an example of an instrument panel 40, devices 41 such as instruments, and a nameplate 42, and FIG. 14(B) is a diagram showing the 2D map coordinates (X, Y) on the map 81 of the area 80.
- FIG. 15 is a functional block diagram schematically showing the configuration of the position detection device 30 according to the third embodiment.
- the position detection device 30 is a device capable of implementing the position detection method according to the third embodiment.
- the position detection device 30 can implement the position detection method by executing a position detection program.
- the position detection device 30 includes an image reception unit 13, a human detection unit 14, a coordinate conversion unit 15, an image reception unit 33, a character recognition unit 34, a character search unit 35, a map coordinate determination unit 16b, and a display control unit 17.
- the hardware configuration of the position detection device 30 is the same as that shown in FIG. 4.
- the human detection unit 14 receives the image I 1 captured by the surveillance camera 11_1, performs processing for detecting the person 90 in the image I 1 , and outputs the 2D camera coordinates (u 1 , v 1 ) indicating the position of the detected person 90.
- the coordinate conversion unit 15 converts the 2D camera coordinates into 3D coordinates (X 1 , Y 1 , Z 1 ) represented by a predetermined common coordinate system.
- the character recognition unit 34 recognizes the character string 43 on the nameplate 42 of the device 41 in the wearable camera image Iw captured by the wearable camera 31 while the person 90 is wearing the wearable camera 31.
- the character search unit 35 searches the layout drawing of the equipment in the area 80 (for example, the map 81 describing the equipment layout) for the recognized character string 43.
- when the recognized character string 43 is found in the layout drawing, the map coordinate determination unit 16b determines the 2D map coordinates (X, Y) based on the found position; when it is not found in the layout drawing, the map coordinate determination unit 16b calculates the 2D map coordinates (X, Y) based on the 3D coordinates.
- the display control unit 17 outputs video data in which the position information of the 2D map coordinates (X, Y) is superimposed on the map 81 of the area 80 .
- FIG. 16 is a flow chart showing processing of the character recognition unit 34 of the position detection device 30 according to the third embodiment.
- the character recognition unit 34 detects a character string area (step S61), divides the character string area into single-character areas (step S62), compares character patterns (step S63), and determines one character (step S64). It then determines whether there is a next character (step S65). If there is a next character (YES in step S65), the character recognition unit 34 repeats steps S63 to S65. If there is no next character (NO in step S65), the character recognition unit 34 outputs the character string (step S66).
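- the patent's recognizer matches character patterns one character at a time; as a hedged stand-in for that pipeline, the sketch below delegates recognition to an off-the-shelf OCR engine. The use of pytesseract, Otsu binarization, and the page-segmentation mode are assumptions, not part of the patent.

```python
import cv2
import pytesseract  # assumed OCR backend standing in for the pattern matcher

def read_nameplate(wearable_frame):
    """Rough stand-in for the FIG. 16 pipeline: isolate the text region
    and recognize the character string on the nameplate.
    """
    gray = cv2.cvtColor(wearable_frame, cv2.COLOR_BGR2GRAY)
    # Binarize so the nameplate lettering stands out (step S61 analogue).
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # "--psm 7" treats the image as a single text line (assumed setting).
    text = pytesseract.image_to_string(binary, config="--psm 7")
    return text.strip()
```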
- FIG. 17 is a flow chart showing processing of the character search section 35 of the position detection device 30 according to the third embodiment.
- the character search unit 35 acquires the layout drawing of the map 81 of the area 80 (step S71), acquires the character string (step S72), and searches the layout drawing for a character string that matches it (step S73). If there is a matching character string (YES in step S74), the character search unit 35 converts the position of the character string into 2D coordinates (step S75) and outputs 2D map coordinates (X, Y) indicating the position of the person. If there is no matching character string (NO in step S74), the character search unit 35 terminates the character search.
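- a sketch of this search, with a plain dictionary standing in for the equipment layout drawing; the nameplate strings and coordinates below are made-up examples, and the actual drawing format is not specified in the patent.

```python
# Hypothetical layout table: nameplate string -> 2D map coordinates of the
# device on the layout drawing.
LAYOUT = {
    "PUMP-101": (12.5, 3.0),
    "PANEL-A2": (4.0, 8.5),
}

def search_layout(recognized_string, fallback_coord):
    """Sketch of FIG. 17: prefer the nameplate position when the
    recognized string is found in the layout; otherwise fall back to
    the coordinate derived from the surveillance-camera image."""
    coord = LAYOUT.get(recognized_string)
    if coord is not None:          # YES in step S74
        return coord               # person assumed to stand at the device
    return fallback_coord          # NO in step S74: camera-based estimate
```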
- as described above, with the position detection device 30, the position detection method, or the position detection program according to the third embodiment, when the character string 43 on the nameplate 42 is found based on the image of the wearable camera 31, the position of the nameplate 42 is output as the 2D map coordinates (X, Y); when the character string 43 on the nameplate 42 is not found, the position of the person 90 calculated based on the surveillance camera image is output as the 2D map coordinates (X, Y). Such control can improve the accuracy of position detection, because the position of the nameplate 42 is used in preference to the calculation result from the captured image.
- furthermore, even at a position that is a blind spot of the monitoring cameras, the coordinates can be calculated based on the image of the wearable camera 31.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Computer Hardware Design (AREA)
- Signal Processing (AREA)
- Human Computer Interaction (AREA)
- General Engineering & Computer Science (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
Description
FIG. 1 is a diagram showing a plurality of monitoring cameras 11_1 to 11_n (n is a positive integer), which are components used for position detection by the position detection device 10 according to Embodiment 1. The monitoring cameras 11_1 to 11_n are installed in a predetermined area 80. An instrument panel 40 and a machine 50 are placed in the area 80. The monitoring cameras 11_1 to 11_n are fixed cameras. The monitoring cameras 11_1 to 11_n may be rotatable PTZ cameras, but in this case they must have a function of notifying the position detection device 10 of their camera parameters. The monitoring cameras 11_1 to 11_n photograph the photographing ranges R1 to Rn, respectively, and transmit the images I1 to In to the position detection device 10. The position detection device 10 calculates the two-dimensional (2D) map coordinates (X, Y) of the person 90 based on the images I1 to In and generates video data for displaying, on a display device, information indicating the 2D map coordinates (X, Y) on a map 81 of the area 80. The area is, for example, inside a factory. The person 90 is, for example, a worker.
FIG. 8 is a diagram showing the monitoring cameras 11_1 to 11_n and a mobile terminal 21, which are components used for position detection by the position detection device 20 according to Embodiment 2. The mobile terminal 21 is carried by the person 90 to be detected. One or more monitoring cameras 11_1 to 11_n are installed in a predetermined area 80 in which the instrument panel 40 and the machine 50 are placed. The monitoring cameras 11_1 to 11_n may be existing cameras. The monitoring cameras 11_1 to 11_n photograph the photographing ranges R1 to Rn, respectively, and transmit the images I1 to In to the position detection device 20. The position detection device 20 calculates the 2D map coordinates (X, Y) of the person 90 based on the images I1 to In and generates video data for displaying, on the display device 18, information indicating the 2D map coordinates (X, Y) on the map 81 of the area 80.
FIG. 13 is a diagram showing the monitoring cameras 11_1 to 11_n and a wearable camera 31, which are components used for position detection by the position detection device 30 according to Embodiment 3. The wearable camera 31 is a small camera that photographs the line-of-sight direction of the worker, who is the person 90, and is also called smart glasses. The wearable camera 31 is carried by the person 90. One or more monitoring cameras 11_1 to 11_n are installed in a predetermined area 80 in which the instrument panel 40 and the machine 50 are placed. The monitoring cameras 11_1 to 11_n may be existing cameras. The monitoring cameras 11_1 to 11_n photograph the photographing ranges R1 to Rn, respectively, and transmit the images I1 to In to the position detection device 30. The position detection device 30 calculates the 2D map coordinates (X, Y) of the person 90 based on the images I1 to In and generates video information for displaying, on a display device, information indicating the 2D map coordinates (X, Y) on the map 81 of the area 80.
Claims (13)
- 1. A position detection device comprising: a human detection unit that receives a plurality of images captured by a plurality of surveillance cameras, performs processing for detecting a person in each of the plurality of images, and outputs a plurality of two-dimensional camera coordinates indicating the detected position of the person; a coordinate conversion unit that converts the plurality of two-dimensional camera coordinates into a plurality of three-dimensional coordinates represented by a predetermined common coordinate system; a map coordinate determination unit that generates two-dimensional map coordinates based on the plurality of three-dimensional coordinates; and a display control unit that acquires a map and outputs video data in which position information of the two-dimensional map coordinates is superimposed on the map.
- 2. The position detection device according to claim 1, wherein the map coordinate determination unit calculates the two-dimensional map coordinates using an arithmetic average of the coordinate values of the plurality of three-dimensional coordinates.
- 3. The position detection device according to claim 1 or 2, further comprising a terminal position calculation unit that calculates terminal position coordinates indicating the position of a mobile terminal carried by the person, based on detection values of an inertial sensor of the mobile terminal, wherein the map coordinate determination unit calculates the two-dimensional map coordinates based on the plurality of three-dimensional coordinates during a period in which the person is detected, and calculates the two-dimensional map coordinates based on the terminal position coordinates during a period in which the person is not detected.
- 4. The position detection device according to claim 3, wherein the terminal position calculation unit calculates the terminal position coordinates using pedestrian dead reckoning.
- 5. The position detection device according to claim 1 or 2, further comprising: a character recognition unit that recognizes a character string on a nameplate of a device in a wearable camera image captured by a wearable camera worn by the person; and a character search unit that searches a layout drawing of the device for the recognized character string, wherein the map coordinate determination unit determines the two-dimensional map coordinates based on the found position when the character string is found in the layout drawing, and generates the two-dimensional map coordinates based on the plurality of three-dimensional coordinates when the recognized character string is not found in the layout drawing.
- 6. A position detection device comprising: a human detection unit that receives an image captured by a surveillance camera, performs processing for detecting a person in the image, and outputs two-dimensional camera coordinates indicating the detected position of the person; a coordinate conversion unit that converts the two-dimensional camera coordinates into three-dimensional coordinates represented by a predetermined common coordinate system; a terminal position calculation unit that calculates terminal position coordinates indicating the position of a mobile terminal carried by the person, based on detection values of an inertial sensor of the mobile terminal; a map coordinate determination unit that calculates two-dimensional map coordinates based on the three-dimensional coordinates during a period in which the person is detected, and calculates the two-dimensional map coordinates based on the terminal position coordinates during a period in which the person is not detected; and a display control unit that acquires a map and outputs video data in which position information of the two-dimensional map coordinates is superimposed on the map.
- 7. The position detection device according to claim 6, wherein the terminal position calculation unit calculates the terminal position coordinates using pedestrian dead reckoning.
- 8. A position detection device comprising: a human detection unit that receives an image captured by a surveillance camera, performs processing for detecting a person in the image, and outputs two-dimensional camera coordinates indicating the detected position of the person; a coordinate conversion unit that converts the two-dimensional camera coordinates into three-dimensional coordinates represented by a predetermined common coordinate system; a character recognition unit that recognizes a character string on a nameplate of a device in a wearable camera image captured by a wearable camera while the person is wearing the wearable camera; a character search unit that searches a layout drawing of the device for the recognized character string; a map coordinate determination unit that determines two-dimensional map coordinates based on the found position when the recognized character string is found in the layout drawing, and calculates the two-dimensional map coordinates based on the three-dimensional coordinates when the recognized character string is not found in the layout drawing; and a display control unit that acquires a map and outputs video data in which position information of the two-dimensional map coordinates is superimposed on the map.
- 9. The position detection device according to any one of claims 1 to 8, wherein, when determining that the person is not shown in the image of the surveillance camera, the map coordinate determination unit transmits to the person a message prompting the person to move into the imaging range of the surveillance camera.
- 10. A position detection method executed by a position detection device, the method comprising: receiving a plurality of images captured by a plurality of surveillance cameras, performing processing for detecting a person in each of the plurality of images, and outputting a plurality of two-dimensional camera coordinates indicating the detected position of the person; converting the plurality of two-dimensional camera coordinates into a plurality of three-dimensional coordinates represented by a predetermined common coordinate system; generating two-dimensional map coordinates based on the plurality of three-dimensional coordinates; and acquiring a map and outputting video data in which position information of the two-dimensional map coordinates is superimposed on the map.
- 11. A position detection method executed by a position detection device, the method comprising: receiving an image captured by a surveillance camera, performing processing for detecting a person in the image, and outputting two-dimensional camera coordinates indicating the detected position of the person; converting the two-dimensional camera coordinates into three-dimensional coordinates represented by a predetermined common coordinate system; calculating terminal position coordinates indicating the position of a mobile terminal carried by the person, based on detection values of an inertial sensor of the mobile terminal; calculating two-dimensional map coordinates based on the three-dimensional coordinates during a period in which the person is detected, and calculating the two-dimensional map coordinates based on the terminal position coordinates during a period in which the person is not detected; and acquiring a map and outputting video data in which position information of the two-dimensional map coordinates is superimposed on the map.
- 12. A position detection method executed by a position detection device, the method comprising: receiving an image captured by a surveillance camera, performing processing for detecting a person in the image, and outputting two-dimensional camera coordinates indicating the detected position of the person; converting the two-dimensional camera coordinates into three-dimensional coordinates represented by a predetermined common coordinate system; recognizing a character string on a nameplate of a device in a wearable camera image captured by a wearable camera while the person is wearing the wearable camera; searching for the position of the person based on a result of collating the recognized character string with character strings included in a layout drawing of the device; determining two-dimensional map coordinates based on the found position when the recognized character string is found in the layout drawing, and calculating the two-dimensional map coordinates based on the three-dimensional coordinates when the recognized character string is not found in the layout drawing; and acquiring a map and outputting video data in which position information of the two-dimensional map coordinates is superimposed on the map.
- 13. A position detection program causing a computer to execute the position detection method according to any one of claims 10 to 12.
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2023515973A JP7309097B2 (ja) | 2021-04-22 | 2021-04-22 | Position detection device, position detection method, and position detection program |
KR1020237034470A KR20230152146A (ko) | 2021-04-22 | 2021-04-22 | Position detection device, position detection method, and position detection program |
CN202180097069.9A CN117121471A (zh) | 2021-04-22 | 2021-04-22 | Position detection device, position detection method, and position detection program |
DE112021007115.7T DE112021007115T5 (de) | 2021-04-22 | 2021-04-22 | Position detection device, position detection method, and position detection program |
PCT/JP2021/016290 WO2022224402A1 (ja) | 2021-04-22 | 2021-04-22 | Position detection device, position detection method, and position detection program |
US18/376,865 US20240037779A1 (en) | 2021-04-22 | 2023-10-05 | Position detection device, position detection method, and storage medium storing position detection program |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2021/016290 WO2022224402A1 (ja) | 2021-04-22 | 2021-04-22 | Position detection device, position detection method, and position detection program |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US18/376,865 Continuation US20240037779A1 (en) | 2021-04-22 | 2023-10-05 | Position detection device, position detection method, and storage medium storing position detection program |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022224402A1 (ja) | 2022-10-27 |
Family
ID=83722213
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2021/016290 WO2022224402A1 (ja) | 2021-04-22 | 2021-04-22 | Position detection device, position detection method, and position detection program |
Country Status (6)
Country | Link |
---|---|
US (1) | US20240037779A1 (ja) |
JP (1) | JP7309097B2 (ja) |
KR (1) | KR20230152146A (ja) |
CN (1) | CN117121471A (ja) |
DE (1) | DE112021007115T5 (ja) |
WO (1) | WO2022224402A1 (ja) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2005076621A1 (ja) * | 2004-02-03 | 2005-08-18 | Matsushita Electric Industrial Co., Ltd. | Monitoring system and camera terminal |
WO2009110417A1 (ja) * | 2008-03-03 | 2009-09-11 | ティーオーエー株式会社 | Device and method for specifying installation conditions of a pan-tilt camera, and camera control system including the installation condition specifying device |
JP2013003863A (ja) * | 2011-06-17 | 2013-01-07 | Hitachi Engineering & Services Co Ltd | Method and system for preventing erroneous operation of field devices |
JP2014146247A (ja) * | 2013-01-30 | 2014-08-14 | Secom Co Ltd | Object feature extraction device, object region extraction device, and object tracking device |
JP2016219990A (ja) * | 2015-05-19 | 2016-12-22 | キヤノン株式会社 | Position estimation system, position estimation method, and program |
WO2019167269A1 (ja) * | 2018-03-02 | 2019-09-06 | 三菱電機株式会社 | Movement detection device |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2017034511A (ja) | 2015-08-03 | 2017-02-09 | 株式会社ブイ・アール・テクノセンター | Moving object detection system |
CN110851557A (zh) | 2019-11-28 | 2020-02-28 | 云南电网有限责任公司电力科学研究院 | Distribution network GIS data collection system and method |
-
2021
- 2021-04-22 CN CN202180097069.9A patent/CN117121471A/zh active Pending
- 2021-04-22 KR KR1020237034470A patent/KR20230152146A/ko active IP Right Grant
- 2021-04-22 WO PCT/JP2021/016290 patent/WO2022224402A1/ja active Application Filing
- 2021-04-22 DE DE112021007115.7T patent/DE112021007115T5/de active Pending
- 2021-04-22 JP JP2023515973A patent/JP7309097B2/ja active Active
-
2023
- 2023-10-05 US US18/376,865 patent/US20240037779A1/en active Pending
Also Published As
Publication number | Publication date |
---|---|
US20240037779A1 (en) | 2024-02-01 |
JP7309097B2 (ja) | 2023-07-14 |
DE112021007115T5 (de) | 2024-02-15 |
KR20230152146A (ko) | 2023-11-02 |
JPWO2022224402A1 (ja) | 2022-10-27 |
CN117121471A (zh) | 2023-11-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US12067784B2 (en) | Surveillance information generation apparatus, imaging direction estimation apparatus, surveillance information generation method, imaging direction estimation method, and program | |
Rambach et al. | Learning to fuse: A deep learning approach to visual-inertial camera pose estimation | |
US20160117824A1 (en) | Posture estimation method and robot | |
US20040090444A1 (en) | Image processing device and method therefor and program codes, storing medium | |
JP2012075060A (ja) | Image processing device and imaging device using the same | |
JP2008204384A (ja) | Imaging device, object detection method, and attitude parameter calculation method | |
KR101125233B1 (ko) | Security method and security system based on convergence technology | |
JP2017036970A (ja) | Information processing device, information processing method, and program | |
KR20120108256A (ko) | Robot fish position recognition system and robot fish position recognition method | |
Huttunen et al. | A monocular camera gyroscope | |
CN112087728B (zh) | Method, apparatus, and electronic device for obtaining the spatial distribution of Wi-Fi fingerprints | |
CN113610702A (zh) | Mapping method and apparatus, electronic device, and storage medium | |
JP7309097B2 (ja) | Position detection device, position detection method, and position detection program | |
US20220084244A1 (en) | Information processing apparatus, information processing method, and program | |
JP7562398B2 (ja) | Position management system | |
JP5230354B2 (ja) | Position specifying device and changed building detection device | |
CN116576866B (zh) | Navigation method and device | |
JP7444292B2 (ja) | Detection system, detection method, and program | |
US20230134912A1 (en) | Information processing device, information processing system, information processing method, and recording medium | |
WO2022014361A1 (ja) | Information processing device, information processing method, and program | |
JP5178905B2 (ja) | Imaging device, object detection method, and attitude parameter calculation method | |
JP4027294B2 (ja) | Moving object detection device, moving object detection method, and moving object detection program | |
EP4292777A1 (en) | Assistance system, image processing device, assistance method and program | |
JP2017215263A (ja) | Position detection method and position detection system | |
TW202429389A | Information display method, processing device therefor, and information display system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21937895 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2023515973 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 202347063651 Country of ref document: IN |
|
ENP | Entry into the national phase |
Ref document number: 20237034470 Country of ref document: KR Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 1020237034470 Country of ref document: KR |
|
WWE | Wipo information: entry into national phase |
Ref document number: 112021007115 Country of ref document: DE |
|
WWE | Wipo information: entry into national phase |
Ref document number: 11202307949W Country of ref document: SG |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 21937895 Country of ref document: EP Kind code of ref document: A1 |