WO2019153855A1 - Object information acquisition system capable of 360-degree surround orientation and position sensing, and application thereof - Google Patents


Info

Publication number
WO2019153855A1
WO2019153855A1 (application PCT/CN2018/118924)
Authority
WO
WIPO (PCT)
Prior art keywords
fisheye
unit
information
orientation
identifiable
Prior art date
Application number
PCT/CN2018/118924
Other languages
English (en)
Chinese (zh)
Inventor
罗镇邦
Original Assignee
迎刃而解有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 迎刃而解有限公司 filed Critical 迎刃而解有限公司
Publication of WO2019153855A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/24Aligning, centring, orientation detection or correction of the image

Definitions

  • the present invention relates to the field of optical positioning and intelligent information acquisition and processing, and in particular to a 360 degree surround orientation and position sensing object information acquiring system and application thereof.
  • the traditional mobile positioning methods are mainly as follows:
  • GPS positioning uses electromagnetic-wave signals, and its accuracy is often degraded by building blockage, indoor or underground shielding, and multipath propagation caused by reflections from buildings or walls. It is therefore suitable only for outdoor positioning, with an accuracy measured in metres. In urban environments where building spacing is often 10 metres or less, inaccurate GPS positions frequently mislead users to wrong map locations.
  • WI-FI positioning offers higher accuracy than cellular-network positioning. It uses wireless access points and measures the strength of signals received from one or more network nodes to determine the position of the object under test.
  • WI-FI positioning is very expensive, and even with a large number and high density of WI-FI access points it achieves a positioning accuracy of only 5-10 m.
  • Ordinary laser/ultrasonic range finders are also used for ranging, but they measure distance only and provide no information on the position and direction of the object under test; moreover, the laser itself can be harmful to the human eye. Conventional laser and ultrasonic ranging therefore provide no 360-degree surround position and direction information.
  • An image, as an effective real-time carrier of position information, has unique advantages for real-time collection of environmental information: it suffers no electromagnetic-signal interference or related loss of positioning accuracy, and images can be collected in real time.
  • existing image recognition technology is far from able to accurately recognize the orientation of a positioning object from a single captured picture alone.
  • two-dimensional codes are already scanned to associate a network URL with, for example, a corresponding payment application, but image recognition and code-scanning technology have not been applied to 360-degree surround position and orientation sensing.
  • higher positioning accuracy is needed to achieve such functions, and there is a large gap between the accuracy of the existing positioning methods and actual demand.
  • with RF-based (e.g. Wi-Fi) positioning technology, a hidden intruder (for example in a shopping-mall bathroom) can transmit RF signals from his own device to interfere with, destroy, or attack the RF positioning signals. The present invention avoids this hidden-intruder problem.
  • the present invention provides a non-RF positioning method: an intruder must physically block the line of sight between the device and the observed object, and such behavior can be recorded by the device's camera or by closed-circuit TV security monitoring in the environment.
  • the first object of the present invention is to provide a 360-degree surround orientation and position sensing object information acquisition system based on 360-degree digitally photographed fisheye images (A 360-degree surround direction and position aware object information retrieval System based on 360-degree fisheye digital camera images), the 360-degree surrounding azimuth and position sensing object information acquiring system includes a fisheye image capturing unit, an object detecting unit, an object orientation computing unit, a system positioning operation unit, and an object information acquiring unit,
  • the fisheye image capturing unit is configured to capture the surrounding environment to be recognized to obtain an overall fisheye image of the environment to be identified;
  • the object detecting unit is configured to detect at least one identifiable object in the environment to be identified;
  • the object information acquiring unit obtains information about the object from the network (for example, the Internet), such as the object's global location, its environment map and position, or other information, or reads information embedded in the object itself (for example, a two-dimensional code embedded in a label);
  • the object orientation computing unit is configured to calculate a 360-degree orientation of each identifiable object relative to the fisheye camera;
  • the system positioning operation unit is configured to calculate, by triangulation from the orientation information of at least two different identifiable objects relative to the fisheye camera provided by the object orientation operation unit, the 360-degree spatial distance and position of those objects relative to the system, and, using the objects' global or environment-map positions, the absolute global or environment-map location of the system.
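The data flow among the five units can be sketched as a simple pipeline. The following Python sketch is illustrative only and is not part of the patent; the callables `capture`, `detect`, `orient`, `fetch_info`, and `position` are hypothetical stand-ins for the respective units.

```python
from dataclasses import dataclass, field

@dataclass
class DetectedObject:
    kind: str                                  # "tag" or "contour" (non-label)
    pixel: tuple                               # anchor point (x, y) in the fisheye image
    info: dict = field(default_factory=dict)   # metadata fetched for the object

def locate(capture, detect, orient, fetch_info, position):
    """One positioning cycle chaining the five core units together."""
    image = capture()                                  # fisheye image capturing unit
    objects = detect(image)                            # object detecting unit
    for obj in objects:
        obj.info = fetch_info(obj)                     # object information acquiring unit
    bearings = [orient(obj.pixel) for obj in objects]  # object orientation computing unit
    if len(bearings) < 2:
        raise ValueError("triangulation needs at least two identifiable objects")
    return position(objects, bearings)                 # system positioning operation unit
```

In the patent's terms, `position` would combine the bearings with the objects' known map positions by triangulation to yield both the relative and the absolute location of the system.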
  • a second object of the present invention is to provide an intelligent telematics communication system, and more particularly to an intelligent remote shopping system using a 360 degree surround orientation and position sensing object information acquisition system.
  • a third object of the present invention is to provide an intelligent car navigation system.
  • a fourth object of the present invention is to provide an intelligent blind-guidance device. Furthermore, the high-density, large-scale RF devices currently used by many indoor positioning systems, such as Wi-Fi access points, may also affect people's physical health. The invention largely avoids such harm through its optical positioning method.
  • in the 360-degree surround orientation and position sensing object information acquiring system of the present invention, based on 360-degree digital fisheye images, the 360-degree fisheye image obtained at each fisheye observation point of the fisheye image capturing unit is processed to perform 360-degree surround object detection on the overall environment around that observation point, sensing the 360-degree orientation of each object and the 360-degree spatial distance and position of the perceived objects.
  • the acquisition system includes:
  • the fisheye image capturing unit uses at least one fisheye digital camera to photograph the surrounding environment to be recognized, with the optical axis of the fisheye digital camera perpendicular to the viewing plane (the plane of the direction of view), to obtain a 360-degree surround overall fisheye image of the environment to be identified;
  • the object detecting unit receives the output image of the fisheye image capturing unit, and detects at least one identifiable object from the overall fisheye image according to different object categories;
  • the object information acquiring unit is configured such that, for an identifiable object of the label type, the image of the label object is locally scanned/recognized to extract the embedded object information, including but not limited to a network (e.g., the Internet) information address of the object;
  • the object orientation calculation unit receives the detection result of the object detecting unit and, for each identifiable object detected, determines its corresponding position point in the overall fisheye image and calculates, from the imaging parameters of the fisheye camera, the 360-degree orientation of each identifiable object relative to the fisheye camera in the environment to be identified;
  • the system positioning operation unit receives the result of the object orientation operation unit and calculates, by triangulation from the orientation information of at least two different identifiable objects relative to the fisheye camera, the 360-degree spatial distance and position of those objects relative to the system; using the absolute position of each identifiable object on a global or environment map, obtained from the object information acquiring unit or from information contained in the object itself (e.g., in its QR-code label), the exact location of the system on the global or environment map is then calculated from the surrounding objects that provide location information.
  • the tag-type identifiable object is a specially customized identifiable tag pattern; the object information acquiring unit obtains a network (e.g., Internet) channel bound to the identifiable object by locally scanning the object code on the identifiable tag pattern, and obtains the object information by accessing that network channel.
  • the non-label-type identifiable object is a specific object contour pattern preset by the system, and the object detecting unit performs object detection on the overall image by an artificial intelligence image service built in or accessed through a network (for example, the Internet).
  • the identifiable label pattern is a high-contrast double square pattern;
  • the object code is a two-dimensional code pasted or printed within the double square pattern for obtaining the object's network (e.g., Internet) channel.
  • a Morse Code encoding the basic information of the identifiable object is overlaid on the double square pattern.
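As a concrete illustration of carrying basic tag information in Morse code around the pattern, the following sketch encodes and decodes short text with the standard international Morse table; the patent does not specify this exact encoding, so treat the separators and table as assumptions.

```python
# Standard international Morse table (letters and digits) for tag metadata.
MORSE = {
    "A": ".-", "B": "-...", "C": "-.-.", "D": "-..", "E": ".",
    "F": "..-.", "G": "--.", "H": "....", "I": "..", "J": ".---",
    "K": "-.-", "L": ".-..", "M": "--", "N": "-.", "O": "---",
    "P": ".--.", "Q": "--.-", "R": ".-.", "S": "...", "T": "-",
    "U": "..-", "V": "...-", "W": ".--", "X": "-..-", "Y": "-.--",
    "Z": "--..", "0": "-----", "1": ".----", "2": "..---",
    "3": "...--", "4": "....-", "5": ".....", "6": "-....",
    "7": "--...", "8": "---..", "9": "----.",
}
DECODE = {v: k for k, v in MORSE.items()}

def encode(text):
    """Encode tag text as Morse: letters separated by spaces, words by ' / '."""
    return " / ".join(" ".join(MORSE[c] for c in word)
                      for word in text.upper().split())

def decode(code):
    """Invert encode(): recover the tag text from its Morse form."""
    return " ".join("".join(DECODE[sym] for sym in word.split())
                    for word in code.split(" / "))
```

For example, a road-sign tag could carry `encode("ROAD SIGN 7")` along the pattern edge and any reader that detects the dot/dash marks recovers the text with `decode`.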
  • the method further comprises:
  • an optical telescope unit with digital-shooting capability is connected to the system positioning operation unit and automatically aims at, focuses on, and digitally captures the identifiable object according to the relative positioning information provided by the system positioning operation unit.
  • the method further comprises:
  • An object information output unit is connected to the system positioning operation unit and/or the object information acquisition unit, and the object information output unit comprises a display unit and/or a voice unit;
  • the display unit is configured to visually display at least one of direction information, positioning information, and object information of the identifiable object; the voice unit is configured to audibly broadcast at least one of direction information, positioning information, and object information of the identifiable object Kind.
  • the number of fisheye cameras may be one, configured to photograph at at least two different positions, so that the object orientation computing unit can provide at least two different sets of direction information relative to the fisheye camera.
  • alternatively, the number of fisheye cameras is two or more, mounted at relatively fixed positions, so that the object orientation computing unit can provide at least two sets of different orientation information with respect to the fisheye cameras.
  • the object information acquisition system further comprises a remote output unit comprising a remote panoramic augmented reality device; the panoramic augmented reality device comprises a reflective surround display device, on which the overall fisheye image of the environment to be recognized, received from the object information acquisition system, is displayed as a panoramic scene, with the object information of identifiable objects detected by the system shown in augmented reality within the panoramic scene.
  • the invention also provides an intelligent telematics communication system, comprising the above 360 degree surround orientation and position sensing object information acquiring system.
  • the invention also provides an intelligent car navigation device, which can perform positioning and intelligent navigation according to identifiable objects in the surrounding environment, and is equipped with the above-mentioned 360-degree surrounding azimuth and position sensing object information acquiring system.
  • a magnetometer device is also included, the magnetometer device being operative to cooperate with the system positioning arithmetic unit to provide absolute positioning information of the smart car navigation system.
  • the invention also provides an intelligent blind-guidance device, which provides guidance information to a visually impaired person by identifying identifiable objects in the surrounding environment; it is equipped with the above-mentioned 360-degree surround azimuth and position sensing object information acquiring system, and the guidance information includes at least one of direction information, positioning information, and object information of the identifiable objects.
  • a magnetometer device is also included, the magnetometer device being operative to cooperate with the system positioning operation unit to provide absolute positioning information of the intelligent blind-guidance device.
  • Using a fisheye lens to photograph the environment overcomes many problems of WI-FI and GPS positioning: the fisheye lens works on optical principles, unlike electromagnetic-wave signal transmission, which suffers signal loss in space, blockage by buildings, and multipath caused by reflection. Pictures, as a good carrier of position information, have none of these problems, so positioning by photographing the environment has unique advantages at close or even very close range.
  • in the prior art, relying solely on image recognition, it is technically quite difficult to obtain the exact geographical position of an identifiable object from one or several captured images; recognizing the object code in a captured fisheye image, however, is no longer a technical problem.
  • once the fisheye image is captured, graphic recognition of the object code can accurately locate the identifiable object's geographical position through a communication connection to the network, which differs from transmitting electromagnetic-wave signals directly from a mobile terminal to determine that position.
  • because the object code records the geographical location of the identifiable object in advance, signal loss during electromagnetic-wave transmission and uncontrollable positioning failures caused by building blockage or multipath reflection are avoided, as are the positioning errors between repeated positionings of the same target caused by real-time signal differences, which would otherwise mislead the user to an unintended location.
  • the invention improves recognition and positioning of distant and even small objects; with the precise positioning of the present application, many practical applications that the prior art could not realize for lack of positioning accuracy, such as autonomous robots or blind persons navigating unfamiliar public places, become feasible.
  • FIG. 1 is a schematic diagram showing the cooperation principle between a core functional unit, a core data element, and a peripheral device of a 360 degree surround orientation and position sensing object information acquiring system according to a preferred embodiment of the present invention
  • FIG. 2 is a structural block diagram of a 360-degree surround orientation and position sensing object information acquiring system according to an embodiment of the present invention
  • FIG. 3 is a schematic diagram showing the calculation of the relative direction calculation of the identifiable object and the fisheye lens in the fisheye image when shooting with a fisheye lens according to an embodiment of the present invention
  • FIG. 4A is a cross-sectional view of a 360-degree direction (in an inverted dome type) taken in a direction perpendicular to the viewing plane of the fisheye lens with a vertical FOV greater than 180 degrees, in accordance with an embodiment of the present invention.
  • 4B is a schematic diagram showing the relationship between the elevation angle and the image resolution when the optical axis direction of the fisheye lens is vertically downward with respect to the viewing plane;
  • 4C is a schematic diagram showing the relationship between the elevation angle and the image resolution when the optical axis of the fisheye lens is vertically upward with respect to the viewing plane;
  • FIG. 5 is a structural block diagram of a 360 degree surround orientation and position sensing object information acquiring system according to another embodiment of the present invention.
  • FIG. 6 is a structural block diagram of a 360-degree surround azimuth and position sensing object information acquiring system according to still another embodiment of the present invention.
  • FIG. 7 is a schematic diagram of a 360 degree surround orientation and position sensing object information acquiring system applied to a smart car navigation system according to an embodiment of the present invention
  • FIG. 8 is a schematic diagram of a 360 degree surround orientation and position sensing object information acquiring system applied to an intelligent blind-guidance device according to an embodiment of the present invention
  • Figure 9 is a schematic view showing an embodiment of a different installation form of a fisheye camera and a modification thereof in the embodiment of the present invention.
  • FIG. 10 is a schematic diagram of a 360 degree surround orientation and position sensing object information acquiring system applied to an intelligent remote shopping system according to an embodiment of the present invention
  • FIG. 11 is a schematic diagram of robots with fisheye cameras incorporating a 360 degree surround orientation and position sensing object information acquisition system according to an embodiment of the present invention, respectively showing a one-fisheye-camera robot and a three-fisheye-camera robot
  • Figure 12 is a schematic illustration of an easily identifiable label pattern in an embodiment of the present invention showing a two-dimensional code disposed at the center of the label pattern and a Morse code disposed at the edge of the label pattern.
  • the invention provides a 360 degree surround orientation and position sensing object information acquiring system based on 360-degree digital fisheye images (A 360-degree surround direction and position aware object information retrieval system based on 360-degree fisheye digital camera images): the 360-degree fisheye image obtained at each fisheye observation point of the fisheye image capturing unit is processed to perform 360-degree surround object detection on the overall environment around that observation point, sensing the 360-degree orientation of each object and the 360-degree spatial distance and position of the perceived objects.
  • for a label-type identifiable object, the image of the label in the overall image is locally scanned to extract the embedded object information, including but not limited to a network (e.g., Internet) information address, which is accessed in real time to obtain the object's network information; for a non-label-type identifiable object, the object image is submitted to an image recognition service via a network (such as the Internet) to identify the object.
  • 1 is a schematic diagram showing the principle of cooperation between a core functional unit of a 360-degree surround orientation and position-aware object information acquisition system based on a 360-degree digitally imaged fisheye image and a peripheral optional device and a core data element according to a preferred embodiment of the present invention
  • 2 is a block diagram showing the core structure of a 360-degree surround orientation and position-aware object information acquisition system based on a 360-degree digitally imaged fisheye image according to an embodiment of the present invention.
  • the 360 degree surround orientation and position sensing object information acquisition system implements the various functions and applications of the present invention through its core functional units working in conjunction with peripheral optional devices.
  • the left side of the figure lists the external inputs or optional devices required for the core functional units to work: the trigger that starts the fisheye camera, the object type information input by the user, the system working range set by the user, an optical telescope with a digital camera on a pan-tilt head, a magnetometer device, and so on.
  • the core data elements generated by each core functional unit are listed on the right side of the figure, and the final output information of the 360-degree surround orientation and position-aware object information acquisition system (e.g., video or audio) is shown below; the core functional units of the present invention are explained in further detail below with reference to FIG. 2.
  • the present invention provides a 360 degree surround orientation and position sensing object information acquiring system.
  • the object information acquiring system includes a fisheye image capturing unit 10, an object detecting unit 20, an object information acquiring unit 30, an object orientation computing unit 40, and a system positioning operation unit 50, corresponding respectively to the core functional units shown in FIG. 1; their functions are briefly described below.
  • the fisheye image capturing unit 10 photographs the surrounding environment to be recognized using a fisheye camera to obtain an overall fisheye image of the environment to be recognized.
  • the fisheye image capturing unit 10 of the embodiment of the present invention uses one or more fisheye cameras to photograph the surrounding environment to be recognized, with the optical axis of the fisheye digital camera perpendicular to the viewing plane (the plane of the direction of view), to obtain a 360-degree overall fisheye image of the environment to be identified. Since the fisheye camera has a short focal length and a large viewing angle, the overall fisheye image it captures covers a larger field of view.
  • using an ultra-wide-angle fisheye camera to photograph the identification environment overcomes the traditional limitations of WI-FI and GPS positioning, because the image captured by the fisheye camera is formed by optical principles rather than electromagnetic-wave conduction; electromagnetic waves suffer signal loss in space and positioning failure due to building blockage, whereas images are uniquely suited as carriers of location information, especially in indoor environments.
  • the object detecting unit 20 communicates with the fisheye image capturing unit 10 for detecting at least one identifiable object located in the environment to be recognized from the overall fisheye image according to different object categories. Specifically, the object detecting unit 20 receives the overall fisheye image captured by the fisheye image capturing unit 10, and then according to different object categories (the object category can be classified into a tag type identifiable object and a non-label type identifiable object) from the whole. At least one identifiable object located in the environment to be identified is detected in the fisheye image, thereby ultimately achieving optically accurate positioning of the identifiable object.
  • the object category is used to define a specific pattern or contour that the object detecting unit 20 detects from the overall fisheye image, and the user can arbitrarily set or select the object category according to different applications and actual situations.
  • the object category may be a specially customized label-like identifiable object, such as an easily identifiable label pattern, or a system-preset non-label identifiable object, such as a specific object outline pattern (person, table, Cars, etc.).
  • if the object category is a label-type identifiable object, the image of the label object in the overall image is scanned/recognized to extract the embedded object information, including but not limited to the object's network (for example, Internet) information address, which is accessed in real time to obtain the object's network information; if the object category is a non-label-type identifiable object, the object image is submitted to a network (for example, Internet) image recognition service to identify the object and obtain its information;
  • the object orientation computing unit 40 communicates with the object detecting unit 20 for determining, for each identifiable object detected by the object detecting unit 20, a corresponding position point of each identifiable object in the overall fisheye image, and A 360 degree orientation of each identifiable object relative to the fisheye camera in the environment to be identified is calculated from the imaging parameters of the fisheye camera.
  • the system positioning operation unit 50 communicates with the object orientation operation unit 40 and/or the object information acquisition unit 30 for using the orientation information of at least two different sets of identifiable objects provided by the object orientation operation unit 40 with respect to the fisheye camera.
  • the triangulation method is well known in the field of positioning technology and widely applied in conventional positioning methods, and is therefore only briefly explained here.
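For the planar case, the triangulation step can be sketched as follows: given two landmarks with known map coordinates and their bearings measured at the camera (derived from the fisheye azimuths), the camera lies at the intersection of the two bearing rays. This is an illustrative sketch under those assumptions, not the patent's exact computation.

```python
import math

def triangulate(a, alpha, b, beta):
    """Estimate the camera position from two landmarks with known map
    coordinates `a`, `b` and their bearings `alpha`, `beta` (radians,
    measured at the camera toward each landmark)."""
    ax, ay = a
    bx, by = b
    ca, sa = math.cos(alpha), math.sin(alpha)
    cb, sb = math.cos(beta), math.sin(beta)
    # Unknown distances d1, d2 satisfy: a - b = d1*u(alpha) - d2*u(beta)
    det = ca * (-sb) - (-cb) * sa       # determinant of the 2x2 system
    if abs(det) < 1e-9:
        raise ValueError("bearings are parallel; landmarks give no fix")
    dx, dy = ax - bx, ay - by
    d1 = (dx * (-sb) - (-cb) * dy) / det   # Cramer's rule for d1
    # Camera position: C = A - d1 * u(alpha)
    return ax - d1 * ca, ay - d1 * sa
```

The degenerate case (both landmarks on one line through the camera) is exactly why the system requires at least two identifiable objects in *different* directions.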
  • the object information acquiring unit 30 scans the object code on the easily identifiable label pattern to obtain a network (for example, Internet) channel bound to the identifiable object, and then obtains the object information by accessing that network channel; this is simple and fast, can deliver object information in multimedia form (such as audio, video, or text), and is not error-prone.
  • when the object category is a specific object contour pattern preset by the system, the object detecting unit performs object detection on the overall image by an artificial intelligence image service (such as face recognition or object contour recognition) built in or accessed through a network (for example, the Internet), which enhances the user experience.
  • the easily identifiable label pattern is placed at a position in the environment to be recognized that is suitable for photographing by the fisheye image capturing unit 10, so that the unit can capture the label pattern; the object information acquiring unit 30 then scans the object code on the label pattern to obtain a network (for example, Internet) URL bound to the identifiable object and obtains the object information by accessing that URL, improving both the recognition rate and the recognition efficiency.
  • the easily identifiable label pattern is preferably a high contrast double square pattern, in which case the item code can be a two dimensional code affixed or printed within the double square pattern.
  • as shown in the image on the left side of FIG. 12, the easy-to-identify label pattern is a high-contrast double-square pattern, and the object code is a two-dimensional code pasted or printed within the double square pattern.
  • the object information acquiring unit 30 scans the two-dimensional code to obtain a network (for example, Internet) channel bound to the identifiable object and obtains the object information by accessing that channel; obtaining object information by scanning a QR code has the advantages of large information capacity, easy identification, and low cost.
  • the Morse code containing the basic information of the identifiable object can be further embedded in the edge of the double square pattern.
  • as shown in the image on the right side of FIG. 12, a Morse code is embedded in the edge of the double square pattern.
  • the example Morse code in the figure denotes a road sign; the Morse code can also encode the identifiable object's map location or other basic information.
  • by embedding a Morse code carrying the identifiable object's basic information in the edge of the double square pattern, the object information acquiring system of this embodiment extends the range of uses of the easily identifiable label pattern to a wide variety of fields.
  • the object category is not particularly limited, so long as the identifiable object can be easily identified by existing image recognition technology.
  • the identifiable objects can be maintained and set up by specific participants: in a mall, for example, they can be installed by the property-management company, and a manufacturer can pre-attach them to a particular product.
  • The object information acquisition system further includes a remote output unit; the remote output unit includes a remote panoramic augmented reality device, and the panoramic augmented reality device includes a reflective surround display device.
  • The overall fisheye image of the environment to be recognized, received from the object information acquisition system, is displayed on the reflective surround display device as a panoramic scene, in which the object information of the identifiable objects detected by the object information acquisition system is overlaid in an augmented reality manner.
  • The process of obtaining a fisheye image with a fisheye camera and calculating the direction information of an identifiable object relative to the fisheye camera in an actual shooting environment is described below with reference to FIGS. 3-4; for the same identifiable object, the position information is then calculated from two or more sets of direction information.
  • FIG. 3 is a schematic diagram showing the principle of calculating, in a fisheye image, the direction information of an identifiable object relative to the fisheye lens when photographing with a fisheye lens according to an embodiment of the present invention.
  • FIGS. 3 and 4 (4A, 4B, and 4C) together illustrate the principle by which the object orientation (elevation and azimuth with respect to the viewing plane) is obtained.
  • The image position P' (for example, the projection of the detected object's center or of a reference point on the object) is the position in the picture of the spatial point P.
  • The actual position corresponding to point P' in space is the spatial point P, which has an elevation angle with respect to the viewing plane and an azimuth with respect to the system direction reference line.
  • Point P' lies at an axial (radial) distance R (in pixels) from the optical axis of the fisheye image; using the R value, the angle between the direction of point P and the optical axis (in this case, the nadir angle) is calculated from the fisheye lens's fisheye (projection) function.
  • Elevation angle = angle between point P and the optical axis − 90 degrees.
  • The azimuth relative to the system direction reference line is the angle formed by the radial line through point P' on the fisheye image and the reference line.
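A minimal sketch of the elevation/azimuth computation described above. It assumes an equidistant fisheye projection (R proportional to the angle from the optical axis) with the axis pointing vertically downward; the projection model and all names are assumptions, since the text only refers to a generic fisheye function:

```python
import math

def pixel_to_direction(px, py, cx, cy, pixels_per_degree):
    """Convert a fisheye image point P' to (elevation, azimuth) in degrees.

    Assumes an equidistant fisheye projection (R = k * theta) with the
    optical axis pointing vertically downward (nadir) relative to the
    viewing plane. (cx, cy) is the image center; all parameters are
    illustrative assumptions, not taken from the patent text.
    """
    dx, dy = px - cx, py - cy
    R = math.hypot(dx, dy)                      # radial distance in pixels
    nadir_angle = R / pixels_per_degree         # angle from the optical axis
    elevation = nadir_angle - 90.0              # elevation w.r.t. viewing plane
    azimuth = math.degrees(math.atan2(dy, dx))  # angle of the radial line
    return elevation, azimuth

# A point whose radial distance corresponds to 90 degrees from the
# nadir axis lies on the viewing plane (zero elevation).
print(pixel_to_direction(540, 360, 360, 360, 2.0))  # → (0.0, 0.0)
```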
  • FIG. 4A is a schematic side view showing the optical axis of the fisheye lens oriented perpendicular to the viewing plane; FIG. 4B shows the relationship between elevation angle and image resolution when the optical axis of the fisheye lens points vertically downward relative to the viewing plane.
  • FIG. 4C shows the relationship between elevation angle and image resolution when the optical axis of the fisheye lens points vertically upward relative to the viewing plane.
  • The viewing plane defines the zero elevation angle of the three-dimensional field of view at the fisheye viewing point of the 360-degree surround view, and contains the line of sight.
  • The viewing direction may be any direction of view, or it may be the front of the system or its forward/moving direction, as in the "smart positioning driving" application of the invention described later in this specification; the viewing direction may also be a non-ground-plane direction, such as the direction of a robot's eye.
  • The viewing plane may be the ground plane, or a non-ground plane that varies with the orientation of the system, as in the smart guide stick application described later in this specification.
  • The fisheye optical axis should be oriented perpendicular to the viewing plane at the viewing point (e.g., by a fixture or mounting), i.e., pointing upward or downward relative to the viewing plane, to obtain an overall fisheye image of the environment to be identified surrounding the system through 360 degrees.
  • The space of all visible directions of the fisheye can be described as an inverted dome (as in FIG. 4A, where the fisheye axis points vertically downward relative to the viewing plane) or a non-inverted dome (when the fisheye axis points vertically upward relative to the viewing plane), as indicated by the dotted line in the cross-sectional view.
  • The larger the FOV of the fisheye/super-fisheye lens, the more of the surrounding environment can be observed, and the more object spatial orientations can be detected from the 360-degree fisheye image.
  • FIGS. 4B and 4C show the fisheye camera from different perspectives, with the optical axis pointing downward in FIG. 4B and upward in FIG. 4C.
  • All points at the same elevation angle, in all directions of the viewing plane (i.e., through 360 degrees), lie on a circle centered on the fisheye optical axis; all 360-degree directions at any other elevation angle are likewise represented by their corresponding circles around the center.
  • The circle marked in the fisheye images of FIGS. 4B and 4C represents the 360-degree surround image at zero elevation (the viewing plane).
  • In FIG. 4B, moving outward from the center of the circular fisheye image, the elevation angle changes from negative, through zero degrees (the viewing plane), to positive; the circumference, and hence the number of pixels on the circle, increases, so the "circular" resolution of the image is higher for higher elevation angles relative to the viewing plane.
  • The corresponding converted flat (rectangular) panorama is shown on the right, with lower elevation angles at the bottom and higher elevation angles at the top. When a circular fisheye image is converted to this standard flat image, the upper portion (higher elevation) starts from a higher resolution, while the lower elevation angles actually have a lower resolution, so differing degrees of image magnification, such as image interpolation, are required to reach the same number of pixels per row as at the higher elevation angles.
  • In this configuration, a higher elevation angle is therefore better for object detection and image processing precision/accuracy.
  • In FIG. 4C, with the axis pointing upward, the situation is reversed: lower elevation angles have a higher "circular" resolution than higher elevation angles, so lower elevation angles are better for object detection and image processing precision/accuracy.
  • With current high-resolution light-sensing components, which provide very high-resolution digital images, high image processing precision/accuracy can still be achieved at elevations with lower "circular" resolution/fewer pixels.
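The resolution trade-off described above can be made concrete. Assuming an equidistant fisheye projection (radial distance R = k·θ, where θ is the angle from the optical axis and k a pixels-per-degree constant; an assumed model, since the text does not name the fisheye function), the circle at angle θ spans 2πkθ pixels, so the "circular" (azimuthal) resolution grows linearly with the angle from the axis:

```python
import math

def circular_resolution(theta_deg, k=10.0):
    """Pixels per degree of azimuth on the circle at angle theta_deg
    from the optical axis, for an equidistant fisheye R = k * theta.
    k is in pixels per degree; the value 10.0 is an illustrative
    assumption, not taken from the text."""
    radius_px = k * theta_deg
    return 2 * math.pi * radius_px / 360.0

# With the axis pointing downward (FIG. 4B), a higher elevation means
# a larger angle from the axis, hence more azimuthal pixels per degree.
for elevation in (-45, 0, 45):
    theta = elevation + 90  # nadir angle = elevation + 90 degrees
    print(elevation, round(circular_resolution(theta), 2))
```

This matches the text's observation for FIG. 4B: the zero-elevation circle (θ = 90°) has twice the azimuthal pixel density of the −45° circle.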
  • The object information acquiring system according to another embodiment of the present invention further includes an optical telescope unit 60, and the optical telescope unit 60 is connected to the system positioning operation unit 50.
  • Aiming and focusing on the identifiable object are performed automatically according to the relative positioning information provided by the system positioning operation unit 50.
  • By providing the optical telescope unit 60, the object information acquiring system of this embodiment can automatically aim at, focus on, and digitally photograph the identifiable object, overcoming the low recognition rate caused by distant or small identifiable objects; that is, by means of the optical telescope unit 60, the system has a unique advantage in both long-distance and close-range positioning.
  • the object information acquiring system further includes an object information output unit 70, and the object information output unit 70 is connected to the system positioning operation unit 50 and/or the object information acquiring unit 30, and the object information is output.
  • the unit 70 is configured to output object information, object direction information, and object orientation information.
  • the object information output unit 70 further includes a display unit and/or a voice unit, that is, the object information output unit 70 may include one of a display unit and a voice unit, and may also include a display unit and a voice unit.
  • the display unit is configured to visually display at least one of direction information, positioning information, and object information of the identifiable object; the voice unit is configured to audibly broadcast at least one of direction information, positioning information, and object information of the identifiable object Kind.
  • The number of fisheye cameras may be one, shooting at at least two different positions to obtain fisheye images, so that the object orientation computing unit 40 can provide at least two different sets of direction information relative to the fisheye camera. Setting the number of fisheye cameras to one and arranging shooting at at least two different locations not only provides at least two different sets of direction information relative to the fisheye camera, but also reduces cost and size, since only one camera is needed.
  • Preferably, the fisheye image capturing unit 10 has two or more fisheye cameras in relatively fixed positions, so that the object orientation computing unit 40 can provide at least two different sets of direction information relative to the fisheye cameras. Arranging two or more fisheye cameras in relatively fixed positions not only provides at least two different sets of direction information, but also, because the cameras' relative positions are fixed, avoids having to move or adjust camera positions, saving time and improving working efficiency.
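The triangulation performed by the system positioning operation unit 50 from two such sets of direction information can be illustrated with a minimal planar sketch. It assumes two known camera positions in a common plane and the azimuth of the same identifiable object measured from each; the function names and the 2D simplification are illustrative assumptions, not the patent's full 3D method:

```python
import math

def triangulate(p1, az1, p2, az2):
    """Estimate an object's 2D position from two camera positions
    (p1, p2) and the azimuth (bearing, in degrees) of the object as
    seen from each. A planar sketch of the triangulation step."""
    x1, y1 = p1
    x2, y2 = p2
    # Unit direction vectors from the two observation points.
    d1 = (math.cos(math.radians(az1)), math.sin(math.radians(az1)))
    d2 = (math.cos(math.radians(az2)), math.sin(math.radians(az2)))
    # Solve p1 + t1*d1 = p2 + t2*d2 for t1 (ray intersection).
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-9:
        raise ValueError("bearings are parallel; position is ambiguous")
    t1 = ((x2 - x1) * d2[1] - (y2 - y1) * d2[0]) / denom
    return (x1 + t1 * d1[0], y1 + t1 * d1[1])

# Cameras at (0, 0) and (2, 0); an object seen at 45 degrees from the
# first and 135 degrees from the second lies at (1, 1).
print(triangulate((0, 0), 45.0, (2, 0), 135.0))
```

Fixed relative camera positions (the preferred arrangement above) mean the baseline p1→p2 is known by construction, which is what makes this intersection well-posed.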
  • The present invention further provides an intelligent car navigation system capable of positioning and intelligently navigating according to identifiable objects in the surrounding driving environment; the smart car navigator is equipped with the above-described 360-degree surround orientation and position sensing object information acquisition system based on 360-degree digital fisheye imaging.
  • Preferably, a high-contrast double-square pattern is set as an easily recognizable label pattern on street signs or buildings on both sides of the road, and a two-dimensional code pattern is printed or pasted in the center of the double-square pattern; by scanning the two-dimensional code and accessing the network (for example, the Internet), any information about the location of the car or the surrounding buildings can be obtained.
  • A magnetometer device can be provided, used in conjunction with the system positioning operation unit to provide absolute positioning information for the smart car navigation system.
  • The smart car navigation system of this embodiment performs positioning and intelligent navigation according to identifiable objects in the surrounding environment through the following steps:
  • The driver activates a smart car navigator that includes a 360-degree surround orientation and position sensing object information acquisition system based on 360-degree digital fisheye imaging.
  • The smart car navigator can be integrated into the vehicle or installed as a stand-alone device at a suitable location on the vehicle.
  • The smart car navigation system comprises two fisheye cameras, disposed respectively on the left and right sides of the vehicle near the front windshield; after the driver activates the system, its fisheye cameras begin capturing images of the surrounding environment.
  • The fisheye image capturing unit, comprising the two fisheye cameras, captures a 360° surrounding environment image; the object detecting unit detects double-square patterns in the captured image as easy-to-identify label patterns; the object information acquiring unit scans the two-dimensional code inside each double-square pattern and follows the linked web address to read high-precision global map position information; the object orientation operation unit calculates the direction information of each double-square pattern; and the system positioning operation unit uses triangulation to measure the relative position information of each double-square pattern.
  • The fisheye image capturing unit starts capturing the environment to be recognized, usually a 360° surrounding environment, in order to obtain an overall fisheye image of the environment to be identified.
  • the object detecting unit obtains a double square pattern from the overall fisheye image, and the double square pattern includes two-dimensional code information.
  • The object information acquiring unit reads the position information of a high-precision global map via the link URL provided by the two-dimensional code in the double-square pattern; the map position information usually includes longitude and latitude, and since a longitude-latitude pair determines a location uniquely, high-precision global map position information can be obtained.
  • the object orientation arithmetic unit calculates direction information from the object to be identified (for example, the double square pattern).
  • the system positioning operation unit uses triangulation to measure the relative position information of each object to be identified relative to the smart car navigation system.
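The unit-by-unit steps above can be sketched as a single positioning cycle. This is a structural sketch only: the function arguments stand in for the units named in the text, and all names and signatures are illustrative assumptions rather than the patent's implementation:

```python
def navigation_cycle(capture, detect_patterns, fetch_info, bearing_of, triangulate):
    """One positioning cycle of the smart car navigator.

    Each argument is a callable standing in for one unit of the system
    (names are assumptions): capture() returns the fisheye images from
    the two cameras; detect_patterns(img) finds double-square patterns;
    fetch_info(pattern) resolves the embedded QR link to map position
    info; bearing_of(pattern, img) gives a pattern's direction; and
    triangulate(b1, b2) combines two bearings into a relative position.
    """
    img_left, img_right = capture()
    results = []
    for pat in detect_patterns(img_left):
        info = fetch_info(pat)  # high-precision map position via the QR link
        b1 = bearing_of(pat, img_left)
        # Triangulation needs the same pattern seen by the second camera.
        match = next((q for q in detect_patterns(img_right) if q == pat), None)
        if match is not None:
            b2 = bearing_of(match, img_right)
            results.append((info, triangulate(b1, b2)))
    return results
```

Injecting the units as callables keeps the cycle testable with stand-ins before real pattern detection and QR decoding are wired in.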
  • Detection markers carrying the double-square pattern may include road signs, store signs, and the like; a store's advertising information includes audio and video information that can be further downloaded from the network link URL embedded in the sign.
  • Based on the high-precision map position information obtained above, the object information output unit can construct and output a high-precision map display.
  • Using the calculated map positions and orientations, the display unit of the system shows a block map around the vehicle with brief network (e.g., Internet) descriptions of the store signs; the driver may further click on a displayed location, listen to an audio advertisement or watch an advertisement video, and select a store location on the block map; the system display unit/voice unit then outputs the audio advertisement or advertisement video of the selected store.
  • Detailed information about each identifiable object (for example, longitude information and latitude information) is input in advance and acquired by the object information acquiring unit, thereby avoiding the positioning errors caused by electromagnetic wave propagation in conventional real-time positioning.
  • Such information is generally updated in real time by a third party, which avoids the positioning failures caused by slowly updated maps; for example, when a store is renamed and its updated name is used as the destination, existing maps often fail to locate that store name.
  • Positioning methods that rely on electromagnetic wave signals depend largely on reception of the electromagnetic waves themselves, and the user faces different environmental factors at each positioning attempt; for example, in a given fix the user's signal may be blocked by an obstacle, with the result that entering the same destination address often yields different target locations each time.
  • The above defects are solved by first obtaining the exact location of the identifiable object and then associating it with the map: even if the store is renamed, its longitude and latitude undergo no substantial change, so positioning does not fail merely because the map information was not updated in time or the store's name changed. The above example is only used to explain the idea of the present invention.
  • The present application is not limited to the case of a renamed store; cases such as a demolished building, a renamed building, or a renamed merchant, which likewise do not depart from this central idea, are also covered by the invention.
  • The present invention further provides an intelligent guide stick for the blind, which can provide guidance information about identifiable objects in the surrounding environment to a visually impaired person; the intelligent guide stick is equipped with the above-described 360-degree surround orientation and position sensing object information acquiring system, and the guidance information includes at least one of the direction information, positioning information, and object information of the identifiable objects.
  • FIG. 9 is a schematic view showing embodiments of different installation forms of the fisheye cameras, and modifications thereof, in embodiments of the present invention.
  • In the leftmost example in FIG. 9, the smart guide stick is provided with two fisheye image capturing units 10 in the vertical direction, where the upper fisheye image capturing unit 10 shoots upward, the lower fisheye image capturing unit 10 shoots downward, and the shooting ranges of the two units overlap. In the middle example in FIG. 9, the smart guide stick is provided with two fisheye image capturing units 10 in the vertical direction, where both the upper and the lower fisheye image capturing units 10 shoot upward, and their shooting ranges also overlap. In the rightmost example in FIG. 9, the smart guide stick is provided with four fisheye image capturing units 10 in the vertical direction, all four shooting upward, with at least two of them having overlapping shooting ranges.
  • More fisheye image capturing units 10 can be disposed in the vertical direction; this not only saves space in the horizontal direction, but also lets the multiple units 10 provide additional height measurements and more accurate measurement, which can be applied, for example, to taller robots, safety inspections, object search systems in high-rise buildings, and so on.
  • A visually impaired person walks on a tactile (blind) path carrying a guide stick equipped with the 360-degree surround orientation and position sensing object information acquisition system based on 360-degree digital fisheye imaging; double-square label patterns with QR codes are provided on or alongside the tactile path, and stores with drinks and food line both sides of the road (these need not carry an easy-to-recognize label pattern).
  • The fisheye camera on the guide stick captures a 360-degree fisheye image at each system fisheye observation point, and image processing performs 360-degree surround object detection on the overall environment around that observation point, sensing the 360-degree orientation of objects and the 360-degree spatial distance and position of the perceived objects. For tag-class identifiable objects (double-square label patterns with QR codes), the image of the tag in the overall image is identified by partial scanning, the embedded network (e.g., Internet) information address is extracted as the object information link, and that address is accessed to obtain the object information from the network (e.g., the Internet).
  • The direction information and orientation information are calculated by the object orientation operation unit 40 and the system positioning operation unit 50, and broadcast by voice.
  • The visually impaired person can thus accurately obtain information about the drinks, food, streets, and so on along the road through the object information acquiring unit 30, and obtain the direction and orientation information of specific objects through the object orientation operation unit 40 and the system positioning operation unit 50.
  • The guide stick thus serves as the eyes of the visually impaired: it can replace a guide dog, and the user can purchase items or find needed services according to its prompts. Because the guide stick communicates with the network (e.g., the Internet), it can also help the visually impaired avoid hazards, and family members can monitor the user's travel; requiring neither professionals nor accompanying family members, it is a boon for the visually impaired.
  • An intelligent telematics communication system is used for intelligent remote shopping, and includes the 360-degree surround orientation and position sensing object information acquiring system described above, together with a remote panoramic augmented reality device that interacts with the object information acquiring system.
  • The panoramic augmented reality device comprises a reflective surround display device; the overall fisheye image of the environment to be recognized, received from the object information acquisition system, is displayed on the reflective surround display device as a panoramic scene, in which the object information of the identifiable objects detected by the object information acquisition system is overlaid in an augmented reality manner.
  • The reflective surround display device here is preferably the reflective surround display system of Chinese utility model patent ZL201720316246.9 and Hong Kong short-term patent HK1229152, whose inventor is Luo Zhenbang and whose patentee is Solved By Technology Co., Limited.
  • FIG. 11 is a schematic diagram of robots with fisheye cameras incorporating the 360-degree surround orientation and position sensing object information acquisition system according to an embodiment of the present invention, showing respectively a robot with one fisheye camera and a robot with three fisheye cameras.
  • The robot on the left side of FIG. 11 is provided with one movable fisheye image capturing unit 10, which can change the position of the fisheye camera on the robot so as to take multiple fisheye images; by photographing multiple fisheye images of the environment from different positions in the vertical direction, the movable unit supports the positioning operation.
  • The robot on the right side of FIG. 11 has three fisheye image capturing units 10 fixed in position on the robot, arranged in sequence in the vertical direction; the upper two fisheye image capturing units 10 provide higher shooting resolution for the higher-altitude environment, while the lower fisheye image capturing unit 10 provides higher resolution for the lower-altitude environment and yields additional, more accurate measurements for the positioning operation.
  • The acquisition system can be set up in a store's remote shopping front-end device. Referring to FIG. 10, the store remote shopping front-end device may specifically be a robot, and the robot can connect wirelessly with the panoramic augmented reality device, for example over a mobile communication network; the fisheye image capturing unit 10 is disposed on the robot.
  • The fisheye image capturing unit 10 can be arranged as shown in FIG. 11, and the products in the store are each labeled with a double-square-pattern QR code label.
  • Each product has a unique double-square-pattern QR code label, and the robot carries a pan/tilt optical telescope, including a fisheye camera, that is used to zoom in on and capture two-dimensional codes located farther away.
  • The multiple fisheye images captured by the robot's fisheye image capturing unit 10 are processed to form a fisheye panoramic augmented reality image, which is transmitted to a remote client computer and displayed through a reflective surround display device, so that users can experience the product information of the remote store immersively without leaving home, enhancing the user experience.
  • The remote shopping system of this embodiment has the following workflow: a remote network (e.g., Internet) customer communicates with customer service through the network (e.g., the Internet) and selects a robot with a digital fisheye camera; the robot automatically walks between the merchandise display racks while its fisheye camera takes realistic fisheye panoramas of the merchandise; the remote customer's computer, equipped with a reflective surround display device, outputs the realistic fisheye panoramic image, which the remote network (e.g., Internet) customer can browse.
  • For the remote network (e.g., Internet) customer's reference, the system obtains information feedback from the network (e.g., the Internet) and calculates the direction and orientation of a product relative to the robot.
  • At the remote network (e.g., Internet) customer's instruction, the robot accurately places the item into the shopping cart based on the calculated orientation and direction, and likewise selects other items.
  • The mall then mails the goods, or delivers them in person, to the remote network (e.g., Internet) customer's residence.
  • The robot with the 360-degree surround orientation and position sensing object information acquisition system based on 360-degree digital fisheye imaging can also be used in an express parcel sorting system to classify parcels accurately.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to an object information acquisition system capable of 360-degree surround orientation and position sensing based on 360-degree digital fisheye image capture. The object information acquisition system capable of 360-degree surround orientation and position sensing comprises a fisheye image capture unit, an object detection unit, an object information acquisition unit, an object orientation computing unit, and a system positioning computing unit. The object information acquisition system capable of 360-degree surround orientation and position sensing acquires, by means of local image recognition or scanning and the Internet, object information enabling identification of an object. The invention further relates to various applications based on the object information acquisition system capable of 360-degree surround orientation and position sensing.
PCT/CN2018/118924 2018-02-07 2018-12-03 Système d'acquisition d'informations d'objet capable d'une orientation panoramique sur 360 degrés et d'une détection de position, et application de celui-ci WO2019153855A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
HK18101932 2018-02-07
HK18101932.9 2018-02-07

Publications (1)

Publication Number Publication Date
WO2019153855A1 true WO2019153855A1 (fr) 2019-08-15

Family

ID=67548770

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/118924 WO2019153855A1 (fr) 2018-02-07 2018-12-03 Système d'acquisition d'informations d'objet capable d'une orientation panoramique sur 360 degrés et d'une détection de position, et application de celui-ci

Country Status (2)

Country Link
TW (1) TWM580186U (fr)
WO (1) WO2019153855A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111444796A (zh) * 2020-03-13 2020-07-24 深圳前海达闼云端智能科技有限公司 售货机器人的商品摆放判断方法及装置
US20220092626A1 (en) * 2020-09-24 2022-03-24 Toshiba Tec Kabushiki Kaisha Commodity purchase system, relay server, and registration device

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI699198B (zh) * 2019-08-21 2020-07-21 亞東技術學院 視障者智能輔助系統
CN112507752A (zh) * 2020-11-16 2021-03-16 吴沁远 一种基于图像码遮挡的辅助阅读装置及方法
CN113190027B (zh) * 2021-02-26 2022-11-22 中国人民解放军军事科学院战争研究院 一种面向空中态势感知的空间剖分方法

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101452292A (zh) * 2008-12-29 2009-06-10 天津理工大学 基于序列双色点阵式航标的鱼眼镜头全向视觉制导方法
US20120033070A1 (en) * 2010-08-09 2012-02-09 Junichi Yamazaki Local search device and local search method
CN103119611A (zh) * 2010-06-25 2013-05-22 天宝导航有限公司 基于图像的定位的方法和设备
CN103234543A (zh) * 2013-04-26 2013-08-07 慕林 基于二维码或/和nfc的定位导航系统及其实现方法
CN105403235A (zh) * 2014-09-15 2016-03-16 吴旻升 二维空间定位系统及方法
CN106643801A (zh) * 2016-12-27 2017-05-10 纳恩博(北京)科技有限公司 一种定位准确度的检测方法及电子设备
CN106664349A (zh) * 2014-03-25 2017-05-10 6115187加拿大公司暨伊美景象公司 通过记录、共享和处理与广角图像相关联的信息的对系统行为或用户体验的自动化定义
CN107180215A (zh) * 2017-05-31 2017-09-19 同济大学 基于库位和二维码的停车场自动建图与高精度定位方法

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101452292A (zh) * 2008-12-29 2009-06-10 天津理工大学 基于序列双色点阵式航标的鱼眼镜头全向视觉制导方法
CN103119611A (zh) * 2010-06-25 2013-05-22 天宝导航有限公司 基于图像的定位的方法和设备
US20120033070A1 (en) * 2010-08-09 2012-02-09 Junichi Yamazaki Local search device and local search method
CN103234543A (zh) * 2013-04-26 2013-08-07 慕林 基于二维码或/和nfc的定位导航系统及其实现方法
CN106664349A (zh) * 2014-03-25 2017-05-10 6115187加拿大公司暨伊美景象公司 通过记录、共享和处理与广角图像相关联的信息的对系统行为或用户体验的自动化定义
CN105403235A (zh) * 2014-09-15 2016-03-16 吴旻升 二维空间定位系统及方法
CN106643801A (zh) * 2016-12-27 2017-05-10 纳恩博(北京)科技有限公司 一种定位准确度的检测方法及电子设备
CN107180215A (zh) * 2017-05-31 2017-09-19 同济大学 基于库位和二维码的停车场自动建图与高精度定位方法

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111444796A (zh) * 2020-03-13 2020-07-24 深圳前海达闼云端智能科技有限公司 售货机器人的商品摆放判断方法及装置
US20220092626A1 (en) * 2020-09-24 2022-03-24 Toshiba Tec Kabushiki Kaisha Commodity purchase system, relay server, and registration device

Also Published As

Publication number Publication date
TWM580186U (zh) 2019-07-01

Similar Documents

Publication Publication Date Title
US11100260B2 (en) Method and apparatus for interacting with a tag in a wireless communication area
US11194938B2 (en) Methods and apparatus for persistent location based digital content
WO2019153855A1 (fr) Système d'acquisition d'informations d'objet capable d'une orientation panoramique sur 360 degrés et d'une détection de position, et application de celui-ci
US11893317B2 (en) Method and apparatus for associating digital content with wireless transmission nodes in a wireless communication area
JP5844463B2 (ja) 屋内測位のためのロゴ検出
US11640486B2 (en) Architectural drawing based exchange of geospatial related digital content
CN103119611B (zh) 基于图像的定位的方法和设备
US7991194B2 (en) Apparatus and method for recognizing position using camera
US7088389B2 (en) System for displaying information in specific region
JP2004264892A (ja) 運動検出装置及び通信装置
US11106837B2 (en) Method and apparatus for enhanced position and orientation based information display
US12014450B2 (en) Methods and apparatus for secure persistent location based digital content associated with a two-dimensional reference
CN105865419A (zh) 基于地面特征的移动机器人的自主精确定位系统及方法
US20220164492A1 (en) Methods and apparatus for two dimensional location based digital content
WO2022121024A1 (fr) Procédé et système de positionnement de véhicule aérien sans pilote sur la base d'une communication optique sur écran
US11475177B2 (en) Method and apparatus for improved position and orientation based information display
KR101720097B1 (ko) 사용자 기기의 측위방법
US12086507B2 (en) Method and apparatus for construction and operation of connected infrastructure
CN103557834B (zh) 一种基于双摄像头的实体定位方法
KR20180113158A (ko) 위치 검출들을 그래픽 표시로 맵핑하기 위한 방법, 디바이스, 및 시스템
Grejner-Brzezinska et al. From Mobile Mapping to Telegeoinformatics
Xu et al. Indoor-outdoor navigation without beacons: compensating smartphone AR positioning errors with 3D pedestrian network
Liaw et al. The Simulation of the Indoor Positioning by Panoramic Camera and Point Cloud Scanner
US20240045064A1 (en) METHODS AND SYSTEMS FOR DATA MAPPING USING ROADSIDE LiDAR SENSOR DATA AND GEOGRAPHIC INFORMATION SYSTEM (GIS) BASED SOFTWARE
Qian et al. Indoor visible light positioning method based on three light-emitting diodes and an image sensor

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18905255

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18905255

Country of ref document: EP

Kind code of ref document: A1