WO2019214641A1 - Optical tag-based information device interaction method and system (基于光标签的信息设备交互方法及系统) - Google Patents

Optical tag-based information device interaction method and system

Info

Publication number
WO2019214641A1
WO2019214641A1 · PCT/CN2019/085997 · CN2019085997W
Authority
WO
WIPO (PCT)
Prior art keywords
information
terminal device
optical
optical tag
information device
Prior art date
Application number
PCT/CN2019/085997
Other languages
English (en)
French (fr)
Inventor
牛旭恒
方俊
李江亮
Original Assignee
北京外号信息技术有限公司
Priority date
Filing date
Publication date
Application filed by 北京外号信息技术有限公司
Priority to KR1020207035360A (published as KR20210008403A)
Priority to JP2021512990A (published as JP7150980B2)
Priority to EP19799577.2A (published as EP3792711A4)
Publication of WO2019214641A1
Priority to US17/089,711 (published as US11694055B2)

Classifications

    • G — PHYSICS
    • G05 — CONTROLLING; REGULATING
    • G05B — CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B19/00 — Programme-control systems
    • G05B19/02 — Programme-control systems electric
    • G05B19/04 — Programme control other than numerical control, i.e. in sequence controllers or logic controllers
    • G05B19/042 — Programme control other than numerical control, i.e. in sequence controllers or logic controllers, using digital processors
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06K — GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K19/00 — Record carriers for use with machines and with at least a part designed to carry digital markings
    • G06K19/06 — Record carriers for use with machines and with at least a part designed to carry digital markings, characterised by the kind of the digital marking, e.g. shape, nature, code
    • G06K19/067 — Record carriers with conductive marks, printed circuits or semiconductor circuit elements, e.g. credit or identity cards also with resonating or responding marks without active components
    • G06K19/07 — Record carriers with conductive marks, printed circuits or semiconductor circuit elements, with integrated circuit chips
    • G06K19/0723 — Record carriers with integrated circuit chips, the record carrier comprising an arrangement for non-contact communication, e.g. wireless communication circuits on transponder cards, non-contact smart cards or RFIDs
    • G06K19/0728 — Record carriers with integrated circuit chips and an arrangement for non-contact communication, the arrangement being an optical or sound-based communication interface
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 — Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 — Interaction techniques based on graphical user interfaces [GUI], based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 — Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017 — Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03 — Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/0304 — Detection arrangements using opto-electronic means
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03 — Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/0304 — Detection arrangements using opto-electronic means
    • G06F3/0308 — Detection arrangements using opto-electronic means comprising a plurality of distinctive and separately oriented light emitters or reflectors associated to the pointing device, e.g. remote cursor controller with distinct and separately oriented LEDs at the tip whose radiations are captured by a photo-detector associated to the screen
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03 — Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033 — Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/038 — Control and interface arrangements therefor, e.g. drivers or device-embedded control circuitry
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 — Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 — Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06K — GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K17/00 — Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K1/00 - G06K15/00, e.g. automatic card files incorporating conveying and reading operations
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06K — GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00 — Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10 — Methods or arrangements for sensing record carriers by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/10544 — Methods or arrangements for sensing record carriers by scanning of the records by radiation in the optical part of the electromagnetic spectrum
    • G06K7/10821 — Further details of bar or optical code scanning devices
    • G06K7/10861 — Sensing of data fields affixed to objects or articles, e.g. coded labels
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 — Image analysis
    • G06T7/70 — Determining position or orientation of objects or cameras
    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04B — TRANSMISSION
    • H04B10/00 — Transmission systems employing electromagnetic waves other than radio-waves, e.g. infrared, visible or ultraviolet light, or employing corpuscular radiation, e.g. quantum communication
    • H04B10/11 — Arrangements specific to free-space transmission, i.e. transmission through air or vacuum
    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04B — TRANSMISSION
    • H04B10/00 — Transmission systems employing electromagnetic waves other than radio-waves, e.g. infrared, visible or ultraviolet light, or employing corpuscular radiation, e.g. quantum communication
    • H04B10/11 — Arrangements specific to free-space transmission, i.e. transmission through air or vacuum
    • H04B10/114 — Indoor or close-range type systems
    • H04B10/1141 — One-way transmission
    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 — Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/50 — Constructional details
    • H04N23/54 — Mounting of pick-up tubes, electronic image sensors, deviation or focusing coils
    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 — Details of television systems
    • H04N5/222 — Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 — Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2628 — Alteration of picture size, shape, position or orientation, e.g. zooming, rotation, rolling, perspective, translation
    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04W — WIRELESS COMMUNICATION NETWORKS
    • H04W4/00 — Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/02 — Services making use of location information
    • H04W4/023 — Services making use of location information using mutual or relative location information between multiple location based services [LBS] targets or of distance thresholds

Definitions

  • the present invention relates to the field of optical information technology and location services, and more particularly to a method and system for device interaction using optical tags.
  • optical tags are also referred to as optical communication devices.
  • the present invention provides a new optical tag-based information device interaction method and system, so that the user can manipulate the devices in the field of view anytime and anywhere, with WYSIWYG interaction.
  • the present invention provides an optical tag-based information device interaction method, the method comprising:
  • the method may further include: adjusting an imaging position of each information device on a display screen of the terminal device in response to a change in a position and/or a posture of the terminal device.
  • the method may further include:
  • the identified operation is converted into a corresponding operational command and sent to the information device over the network.
  • the user's operation on the interactive interface of the information device may be at least one of the following: screen input, keyboard input, voice input, or gesture input.
  • the present invention also provides an optical tag-based information device interaction system, the system comprising one or more information devices, an optical tag located at a fixed position relative to the information devices, a server storing information related to the information devices and the optical tag, and a terminal device with an imaging device;
  • the terminal device is configured to:
  • the interactive interfaces of the respective information devices are respectively presented at their imaging positions on the display screen for interaction with the respective information devices.
  • the terminal device may be further configured to adjust an imaging position of each information device on a display screen of the terminal device in response to a change in position and/or posture of the terminal device.
  • the terminal device may also be configured to:
  • the identified operation is converted into a corresponding operational command and sent to the information device over the network.
  • the user's operation on the interactive interface of the information device may be at least one of the following: screen input, keyboard input, voice input, or gesture input.
  • the invention further relates to a computing device comprising a processor and a memory storing a computer program which, when executed by the processor, can be used to implement the above method.
  • the invention further relates to a storage medium storing a computer program which, when executed, can be used to implement the above method.
  • FIG. 1 is a schematic diagram of the basic principle of the triangulation method;
  • FIG. 2 is a schematic diagram showing the principle of an imaging process of an imaging device when collecting an optical tag
  • FIG. 3 is a schematic diagram showing a simplified relationship between an object coordinate system and an image coordinate system
  • FIG. 4 is a schematic flowchart diagram of an optical tag-based information device interaction method according to an embodiment of the present invention.
  • Barcodes and QR codes have been widely adopted to encode information. When these codes are scanned with a specific device or software, the corresponding information is recognized.
  • however, the recognition distance of barcodes and QR codes is very limited. For example, when scanning a QR code with a mobile phone camera, the phone must typically be held within a relatively short distance, usually only about 15 times the width of the QR code. Therefore, for long-distance recognition (for example, at 200 times the width of the QR code), barcodes and QR codes usually do not work; very large codes would have to be custom-made, which raises costs and is in many cases impossible due to various other restrictions.
  • the optical tag, in contrast, transmits information by emitting different light; it offers long range, loose visible-light requirements, strong directionality, and locatability, and the information it transmits can change rapidly over time, providing a larger information capacity (see, for example, the optical communication devices described in Chinese patent publications CN104168060A and CN105740936A). Compared with the traditional QR code, the optical tag has a stronger information interaction capability, which can provide great convenience to users and businesses.
  • the optical tag may be any optical communication device capable of transmitting different information by emitting different light.
  • the optical tag can include at least one light source and a controller for controlling different light emitted by the light source to convey different information.
  • the controller can cause the light source to emit different light by changing the properties of the light emitted by the light source.
  • the attribute of the light may be any attribute that an optical imaging device (e.g., a CMOS imaging device) can perceive; for example, it may be an attribute perceptible to the human eye, such as the intensity, color, or wavelength of the light, or an attribute imperceptible to the human eye, such as a change in the intensity, color, or wavelength of electromagnetic radiation outside the visible range, or any combination of the above attributes.
  • a change in the properties of light can be a single property change, or a combination of two or more properties can change.
  • when the intensity of the light is selected as the attribute, the change can be achieved simply by turning the light source on or off.
  • in the following, the light source is turned on or off to change the attributes of the light for simplicity, but those skilled in the art will appreciate that other ways of changing the attributes of the light are also possible.
  • various forms of light sources can be used in the optical tag, as long as one of their attributes that can be perceived by an optical imaging device can be varied at different frequencies.
  • various common optical components can be included in the light source, such as a light guide plate, a soft-light panel, a diffuser, and the like.
  • the light source may be an LED light, an array of a plurality of LED lights, a display screen or a part thereof, and even an illuminated area of light (for example, an illuminated area of light on a wall) may also serve as a light source.
  • the shape of the light source may be various shapes, such as circular, square, rectangular, strip-shaped, or L-shaped.
  • the controller of the optical tag can control the properties of the light emitted by each source to communicate information.
  • "0" or "1" of binary digital information can be represented by controlling the turning on and off of each light source such that multiple light sources in the optical tag can be used to represent a sequence of binary digital information.
  • each light source can be used not only to represent a binary digit but also to represent ternary or higher-base data, for example by selecting the intensity of the emitted light from three or more levels, or its color from three or more colors, or a combination of both. Therefore, the optical tag of the present invention can significantly increase the data encoding density compared to the conventional QR code.
  • the controller of the optical tag can control the light source to change the attributes of the emitted light at a certain frequency, so the optical tag of the present invention can represent different data information, e.g., different sequences of binary digital information, at different times.
  • each frame of the image can then be used to represent a group of information sequences, thereby further significantly increasing the data encoding density compared to a conventional static QR code.
  • an optical tag can be imaged using an optical imaging device or image acquisition device common in the art, and the transmitted information, such as a sequence of binary 1s and 0s, is determined from each frame of the image, thereby accomplishing the transfer of information from the optical tag to the optical imaging device.
  • the optical imaging device or image acquisition device may include an image acquisition component, a processor, a memory, and the like.
  • the optical imaging device or image acquisition device may be, for example, a mobile terminal having a photographing function, including a mobile phone, a tablet, smart glasses, etc., which may include an image capture device and an image processing module.
  • the user visually finds the optical tag within viewing distance and scans it by pointing the imaging sensor of the mobile terminal at the optical tag, capturing and interpreting the transmitted information.
  • the controller of the optical tag controls the light source to change the attribute of the light emitted by the light source at a certain frequency
  • the image acquisition frequency of the mobile terminal can be set to be greater than or equal to twice the attribute-change frequency of the light source.
  • the process of identifying and decoding can be completed by performing a decoding operation on the acquired image frame.
  • a sequence number, check bits, a timestamp, and the like may be included in the information transmitted by the optical tag to avoid duplication or omission of image frames.
  • a start frame or an end frame, or both, may be provided among the multiple image frames as needed to indicate the start or end position of a complete cycle of the multiple image frames; the start or end frame may be set to display a special data combination, for example all 0s or all 1s, or any special combination that cannot coincide with the information actually displayed.
  • taking a CMOS imaging device as an example, when consecutive frames of the light source are captured by the CMOS imaging device, the controller can make the switching interval between the operating modes of the light source equal to the duration of one complete frame of the CMOS imaging device, thereby achieving frame synchronization between the light source and the imaging device. Assuming each light source transmits 1 bit of information per frame, then at a shooting speed of 30 frames per second each light source can convey 30 bits of information per second, with an encoding space of 2^30, which can include, for example, a start-frame marker (frame header), the optical tag ID, a password, a verification code, URL information, address information, a timestamp, or different combinations thereof.
  • Table 1 (reproduced in the Description below) presents an example packet structure according to one embodiment of the present invention.
  • compared with the traditional QR code, the optical tag described above transmits information by emitting different light, offering long range, loose visible-light requirements, strong directionality, and locatability; moreover, the information transmitted by the optical tag can change rapidly over time, providing a large information capacity. Optical tags therefore have a stronger information interaction capability, which can provide great convenience to users and businesses. To provide corresponding services to users and merchants based on optical tags, each optical tag is assigned a unique identifier (ID) used by the manufacturer, manager, user, etc. of the optical tag to uniquely identify it.
  • the identifier can be published by the optical tag, and the user can use the image acquisition device or imaging module built into a mobile phone to capture images of the optical tag and obtain the information (such as the identifier) transmitted by it, so as to access the services provided on the basis of the optical tag.
  • the aforementioned optical tag can also be used to precisely locate (reverse-position) the imaging device that scans it.
  • the geographical location information of the optical tag can be registered in advance on, for example, a server.
  • the optical tag can transmit its identification information (e.g., ID information) during operation, and the imaging device can obtain this ID information by scanning the optical tag; after the imaging device obtains the ID information, it queries the server with the ID information to obtain the geographic location corresponding to the optical tag, and thereby performs reverse positioning to determine its own specific location.
  • optical tag can have a uniform or default physical size or shape and the user device can be aware of the physical size or shape.
  • various feasible reverse-positioning methods can be used to determine the relative positional relationship between the user (actually the user's imaging device) and the optical tag. For example, the relative distance between the imaging device and the optical tag can be determined (e.g., from the imaging size of the optical tag, or by any ranging application on the phone), and triangulation with two or more optical tags can then determine the relative positional relationship between the imaging device and any of the tags. The relative positional relationship between the imaging device and the optical tag can also be determined from the relative distance together with an analysis of the perspective distortion of the optical tag's image on the imaging device.
  • physical size information and/or orientation information or the like of the optical label may be further used.
  • the physical size information and/or orientation information may be stored in the server in association with the identification information of the optical tag.
  • At least two optical tags can be used for positioning.
  • the following steps can be performed for each optical tag:
  • Step 1 Collect the ID information of the optical tag using the imaging device.
  • Step 2: obtain the physical size information and geographic location information of the optical tag by querying with the ID information.
  • Step 3: photograph the optical tag using the default focal length of the imaging device to obtain an image of the optical tag. Since the default focal length is used, the captured image may be blurred.
  • Step 4: adjust and optimize the focal length of the imaging device to obtain a clear image of the optical tag. For example, starting from the default focal length, first try increasing the focal length; if the image becomes clearer, continue increasing it, and if it becomes blurrier, adjust in the opposite direction, i.e., decrease the focal length; and vice versa.
  • during this adjustment, texture features of the optical tag image can be extracted: the clearer the image, the simpler the corresponding texture information and the lower the texture density. The optimal focal length parameter can therefore be determined from the texture density of the tag image; when no smaller texture density can be obtained after several iterations, the image with the minimal texture density is taken to be a clear image, and the focal length parameter corresponding to that minimal texture density is taken as the optimal focal length parameter.
  • Step 5: based on the optimal focal length parameter, take a clear image of the optical tag; then, using the simple lens object-image formula and object-image relationship, calculate the relative distance between the imaging device and the optical tag from the size of the clear image, the physical size of the optical tag, and the optimal focal length parameter.
  • after obtaining the relative distance between the imaging device and each of the at least two optical tags, the specific position information of the imaging device, that is, its specific coordinates in the physical-world coordinate system, can be determined using the triangulation method.
  • FIG. 1 is a schematic diagram of the triangulation method, in which two optical tags (optical tag 1 and optical tag 2) are used for triangulation.
  • when two optical tags are used, two candidate locations are typically obtained; in this case, it may be necessary to choose between these two candidate locations.
  • one of the candidate locations may be selected in conjunction with positioning information (eg, GPS information) of the imaging device (eg, the handset) itself. For example, a candidate location that is closer to the GPS information can be selected.
  • the orientation information of each optical tag may be further considered, the orientation information actually defining an area in which the optical tag can be observed, and thus one of the candidate locations may be selected based on the orientation information.
  • the orientation information of the optical tag can also be stored in the server and can be obtained by querying the ID information of the optical tag.
  • two optical tags have been described above as an example, but those skilled in the art will understand that the triangulation-based method described above also applies to three or more optical tags. In fact, using three or more optical tags allows more precise positioning, and multiple candidate points typically do not occur.
  • the following reverse-positioning method can also be employed, which does not require at least two optical tags but can perform reverse positioning using a single optical tag.
  • the method of this embodiment comprises the following steps:
  • Step 1 Collect the ID information of the optical tag using the imaging device.
  • Step 2: query with the ID information to obtain the geographic location information of the optical tag and related information about multiple points on it.
  • the related information is, for example, the position information of these points on the optical tag and their coordinate information.
  • Step 3: photograph the optical tag using the default focal length of the imaging device to obtain an image of the optical tag.
  • as introduced above, the optimal focal length parameter can be determined from the texture density of the optical tag image; when no smaller texture density can be obtained after several iterations, the image with the minimal texture density can be regarded as a clear image, and the focal length parameter corresponding to the minimal texture density obtained is taken as the optimal focal length parameter.
  • Step 5: based on the optimal focal length parameter, take a clear image of the optical tag to achieve the reverse positioning described below:
  • FIG. 2 is a schematic diagram of the imaging process of an optical tag on an imaging device.
  • the object coordinate system (X, Y, Z) is established with the centroid of the optical tag as its origin, and the image coordinate system (x, y, z) with the position F_c of the imaging device as its origin; the object coordinate system is also called the physical-world coordinate system, and the image coordinate system is also called the camera coordinate system.
  • a two-dimensional coordinate system (u, v), called the image-plane coordinate system, is established in the image plane with the top-left corner of the captured image of the optical tag as the coordinate origin; the intersection of the image plane with the optical axis (i.e., the Z axis) is the principal point, and (c_x, c_y) are the coordinates of the principal point in the image-plane coordinate system.
  • the coordinates of any point P on the optical tag in the object coordinate system are (X, Y, Z); the corresponding image point is q, with coordinates (x, y, z) in the image coordinate system and (u, v) in the image-plane coordinate system.
  • the image coordinate system is not only displaced but also rotated relative to the object coordinate system; the relationship between the object coordinate system (X, Y, Z) and the image coordinate system (x, y, z) can be expressed by a rotation matrix R and a displacement vector t (formula (1) in the Description).
  • f_x and f_y are the focal lengths of the imaging device in the x-axis and y-axis directions, respectively, and c_x, c_y are the coordinates of the principal point in the image-plane coordinate system; f_x, f_y, c_x, c_y are all internal parameters of the imaging device and can be measured in advance.
  • the rotation matrix R and the displacement vector t respectively represent the attitude information of the object coordinate system relative to the image coordinate system (i.e., the attitude of the imaging device relative to the optical tag, that is, the deflection of the imaging device's central axis relative to the tag, also called the orientation of the imaging device relative to the tag) and the displacement information (i.e., the displacement between the imaging device and the optical tag).
  • the rotation can be decomposed into two-dimensional rotations about the respective coordinate axes: if rotations by angles ψ, φ, and θ are performed about the x, y, and z axes in turn, the total rotation matrix R is the product of the three matrices R_x(ψ), R_y(φ), and R_z(θ), namely R = R_z(θ) R_y(φ) R_x(ψ).
  • the displacement vector t can simply be written as a three-component column vector, i.e., t = (t_1, t_2, t_3)^T.
  • s is the object-image conversion factor, equal to the ratio of the image-plane size to the resolution of the imaging device, which is likewise known.
  • based on the information about the plurality of points (e.g., at least four points A, B, C, and D) on the optical tag obtained in step 2 (e.g., the position information of those points on the optical tag), the corresponding image points in the image, such as A', B', C', and D', are determined.
  • the four points A, B, C, and D may, for example, be the four corners of the optical tag, or four separate point light sources located at the four corners of the optical tag, and the like.
  • from these correspondences the rotation matrix R and the displacement vector t can be solved; the rotation matrix R determines the attitude of the imaging device relative to the optical tag, as in the sketch that follows.
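  • A minimal sketch of this pose computation, assuming OpenCV is available; the point coordinates and camera intrinsics below are illustrative assumptions, not values from the patent:

```python
import cv2
import numpy as np

# 3D coordinates of points A, B, C, D in the object (optical tag)
# coordinate system, e.g. the four corners of a 10 cm x 5 cm tag.
object_points = np.array([
    [-0.05,  0.025, 0.0],   # A
    [ 0.05,  0.025, 0.0],   # B
    [ 0.05, -0.025, 0.0],   # C
    [-0.05, -0.025, 0.0],   # D
], dtype=np.float64)

# Pixel coordinates of the corresponding image points A', B', C', D'.
image_points = np.array([
    [480.0, 300.0],
    [560.0, 305.0],
    [558.0, 360.0],
    [478.0, 355.0],
], dtype=np.float64)

# Internal parameters f_x, f_y, c_x, c_y, measured in advance.
camera_matrix = np.array([
    [1000.0,    0.0, 640.0],
    [   0.0, 1000.0, 360.0],
    [   0.0,    0.0,   1.0],
])
dist_coeffs = np.zeros(5)  # assume negligible lens distortion

# Solve for the attitude (R) and displacement (t) of the imaging
# device relative to the optical tag.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                              camera_matrix, dist_coeffs)
R, _ = cv2.Rodrigues(rvec)  # rotation matrix R; tvec is t
print("R =\n", R, "\nt =", tvec.ravel())
```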
  • an optical tag based information device interaction control system is also provided.
  • An information device in the system refers to any computing device that can be interactively controlled over a network, including but not limited to information appliances or home devices.
  • Each information device can be associated with one or more optical tags, and each optical tag can be associated with one or more information devices.
  • the optical tag can be placed on the information device or can be at a relatively fixed location relative to the information device. The physical location of the optical tag and the position of the information device relative to the optical tag are pre-calibrated. Information about the optical tag and its associated information device can be saved on the server for query.
  • the information related to the optical tag may include, for example, the ID information of the optical tag, its physical-world coordinates, physical size and orientation, the identifiers of the information devices associated with it, and information such as the positions of multiple points on the optical tag and their object coordinates.
  • the information related to an information device may include, for example, its identifier, its coordinates in the object coordinate system established with the centroid of the associated optical tag as origin, its position relative to the associated optical tag, its operational interface, descriptive information, size, orientation, and the like.
  • the user can use the imaging device of a terminal device (such as a mobile phone) to scan the optical tag associated with an information device and obtain the ID information of the optical tag.
  • the terminal device may then obtain information about the information device associated with the optical tag from the server according to the optical tag ID information, and may present the interactive interface of the information device at the location where the information device appears on the display screen of the terminal device. In this way, the user can perform interactive control operations on the information device through an interactive interface superimposed on or near it.
  • before the interactive interface of an information device is presented at its current location on the screen of the terminal device, it may be determined whether any information device associated with the optical tag appears on the display screen of the terminal device, and, if so, the imaging position of the information device on the display screen is further determined, for example its two-dimensional image-plane coordinates on the screen.
  • the reverse-positioning method mentioned above may first be used to determine the initial relative positional relationship between the terminal device carried by the user and the optical tag, thereby determining the initial position and initial orientation of the user's terminal device. Further, since the physical position of the optical tag and the position of each information device relative to the optical tag have been calibrated in advance, the initial relative positional relationship between the terminal device and each information device may be determined from the initial position of the terminal device and the pre-stored calibration information.
  • based on the initial relative positional relationship between the terminal device and the information devices and the initial orientation of the terminal device, it may be determined whether any information device associated with the optical tag currently appears on the display screen of the terminal device, and, if so, the imaging position of the information device on the display screen is further determined. If the information device that the user wishes to control does not appear on the current display screen, the user can move the terminal device from its initial location so that the information device appears; for example, the user can pan or rotate the terminal device so that its camera eventually faces the information device.
  • as the terminal device moves, the change in its position and posture can be detected by various existing means (for example, by built-in sensors such as an acceleration sensor and a gyroscope) to determine the position information and posture information of the moved terminal device; based on this information, it can be determined which information devices currently appear on the display screen of the terminal device and their respective presentation positions. The interactive interfaces of these information devices can then be superimposed at their respective imaging locations on the display screen to achieve WYSIWYG interaction with the various information devices.
  • as described above, the transformation between the physical-world coordinate system (X, Y, Z), established with the centroid of the optical tag as origin, and the camera coordinate system (x, y, z), established with the position of the imaging device as origin, can be described by the rotation matrix R and the displacement vector t (for example, formula (1)).
  • once R and t are determined, the transformation relationship between the physical-world coordinates and the image-plane coordinates (for example, formula (3)) is also determined; this transformation, also called the projection relationship, can be used to determine the projected position in the imaging plane of a real object located at a given position in the physical-world coordinate system.
  • the specific position and attitude of the imaging device relative to the optical tag are determined based on the optical tag image acquired by the imaging device and the information related to the optical tag obtained from the server.
  • the rotation matrix R and the displacement vector t in formula (3) and the internal parameters of the imaging device have thus been determined, whereby the image-plane coordinates of each information device can be determined by the formula. Since the physical position of the optical tag and the relative position between the optical tag and each information device are set in advance, the object coordinates of each information device in the physical-world coordinate system can be determined from its position relative to the optical tag.
  • substituting these object coordinates into formula (3) yields the image-plane coordinates of the information device in the imaging plane, and the interactive interface of the information device can then be presented on the terminal device screen at those image-plane coordinates for the user to use, as sketched below.
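  • A minimal sketch of this projection, assuming R and t have already been obtained from the pose estimation step; the pose and calibration values below are illustrative assumptions:

```python
import numpy as np

def project_to_image_plane(P_object, R, t, fx, fy, cx, cy):
    """Project a point given in the object (optical tag) coordinate
    system into image-plane pixel coordinates per formulas (1)-(2)."""
    x, y, z = R @ np.asarray(P_object, dtype=float) + t  # formula (1)
    return fx * (x / z) + cx, fy * (y / z) + cy          # formula (2)

# Example: a lamp calibrated 0.8 m to the right of the tag's centroid,
# camera 2 m in front of the tag and facing it.
R = np.eye(3)
t = np.array([0.0, 0.0, 2.0])
u, v = project_to_image_plane([0.8, 0.0, 0.0], R, t,
                              fx=1000, fy=1000, cx=640, cy=360)
print(f"draw the lamp's interaction icon near pixel ({u:.0f}, {v:.0f})")
```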
  • an icon for the information device may also be superimposed on the information device on the terminal device screen for the user to select; when the user selects the icon, the interactive interface of the information device is presented on the terminal device screen for the user to operate and control the information device. If icons occlude one another, the front icon can be made translucent, or a numeric cue can be displayed near the top icon to indicate that multiple icons overlap at that location.
  • terminal devices can monitor changes in their position and posture in a variety of ways. For example, the terminal device may use the optical tag as a reference point, compare the image captured by the current imaging device with previous images, and identify differences between them to form feature points, using these feature points to calculate the change in its own position and posture.
  • alternatively, a terminal device such as a mobile phone can estimate the position and orientation of its camera in the real world over time from the values measured by its built-in acceleration sensor, gyroscope, and the like; the rotation matrix R and the displacement vector t are then adjusted based on the current position and orientation of the terminal device, and the current image-plane coordinates of the information devices are recomputed to present the related icons or interfaces on the screen, as in the sketch below.
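  • A minimal sketch of such a pose update, assuming an incremental rotation dR and translation dt have already been integrated from the built-in sensors since the last frame (only the composition rule is shown, not the sensor fusion itself):

```python
import numpy as np

def update_pose(R, t, dR, dt):
    """If x_cam = R @ X + t and the new camera frame relates to the
    old one by x_new = dR @ x_cam + dt, then composing the two
    transforms gives the updated pose below."""
    return dR @ R, dR @ t + dt

# Example: the phone translates 10 cm to the right without rotating,
# so points shift 10 cm to the left in the camera frame.
R1, t1 = update_pose(np.eye(3), np.array([0.0, 0.0, 2.0]),
                     np.eye(3), np.array([-0.1, 0.0, 0.0]))
```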
  • the user can configure and operate the information device through the interactive interface displayed on the screen of the terminal device.
  • the manner of operation of the information device, such as voice control or gesture control, may be predefined.
  • when the operation mode of the information device is configured as voice control, the terminal device detects voice input and performs voice recognition, converts the received voice into an operational command, and sends the control command over the network to the information device to operate it.
  • when the operation mode is gesture control, the user's gesture may be captured by the imaging device of the terminal device or by a camera installed in the user's environment; gesture recognition is performed on the terminal device to convert the gesture into a corresponding operational command, which is sent over the network to control the related information device.
  • gestures associated with each information device operation may be predefined; for example, gestures for operating a light may include opening the palm to turn the light on, making a fist to turn it off, pointing a finger up to increase the brightness, and pointing a finger down to decrease the brightness, as in the dispatch sketch that follows.
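  • A minimal dispatch sketch for such predefined gestures; the gesture names and the command format are assumptions for illustration:

```python
# Map recognized gestures to operational commands for the light.
GESTURE_COMMANDS = {
    "palm_open":   {"device": "light", "op": "power", "value": "on"},
    "fist":        {"device": "light", "op": "power", "value": "off"},
    "finger_up":   {"device": "light", "op": "brightness", "value": "+10"},
    "finger_down": {"device": "light", "op": "brightness", "value": "-10"},
}

def gesture_to_command(gesture: str) -> dict:
    """Convert a recognized gesture into the operational command that
    the terminal device sends to the information device over the network."""
    try:
        return GESTURE_COMMANDS[gesture]
    except KeyError:
        raise ValueError(f"no command defined for gesture {gesture!r}")
```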
  • FIG. 4 is a flow chart showing an optical tag-based information device interaction method according to an embodiment of the present invention.
  • in step S1), the initial position and posture of the terminal device relative to the optical tag are determined by using the terminal device carried by the user to capture images of the optical tag located at a fixed position relative to the information device. For example, using the various reverse-positioning methods described above, the initial relative positional relationship between the imaging device performing the image acquisition and the optical tag can be obtained, thereby determining the initial position and initial orientation of the terminal device.
  • in step S2), the relative position between the terminal device and each information device is determined from the initial position of the terminal device and the pre-calibrated position of each information device relative to the optical tag mentioned above; in step S3), the imaging position of each information device on the display screen is then determined.
  • since the physical position of the optical tag and the relative position between the optical tag and each information device are set in advance, and the rotation matrix R and the displacement vector t in formula (3) and the internal parameters of the imaging device have been determined from the optical tag image collected by the imaging device, the image-plane coordinates of each information device can be determined by the formula: the object coordinates of each information device in the physical-world coordinate system are obtained from its position relative to the optical tag, and substituting them into formula (3) yields the image-plane coordinates of the information device in the imaging plane.
  • in step S4), the interactive interfaces of the respective information devices can then be superimposed at their imaging locations on the display screen for interaction with the respective information devices.
  • if the information device the user wishes to control does not appear on the current display screen, the user can move the terminal device from its initial location so that the information device appears in the display screen; for example, the user can pan or rotate the terminal device so that its camera is ultimately oriented toward the information device.
  • as the terminal device moves from its initial position, the change in its position and posture is detected to determine the position information and posture information of the moved terminal device, and based on this information, it is determined which information devices currently appear on the display screen of the terminal device and their respective presentation positions.
  • the interactive interfaces of these information devices can be superimposed at their presentation locations on the display screen to achieve WYSIWYG interaction with the various information devices.
  • the terminal device can monitor changes in its position and posture in various ways.
  • for example, the terminal device may use the optical tag as a reference point, compare the image captured by the current imaging device with previous images, and identify differences between them to form feature points, using these feature points to calculate the change in its own position and posture.
  • alternatively, a terminal device such as a mobile phone can estimate the position and orientation of its camera in the real world over time from the values measured by its built-in acceleration sensor, gyroscope, and the like; the rotation matrix R and the displacement vector t are then adjusted based on the current position and orientation of the terminal device, and the current image-plane coordinates of the information devices are recomputed to present the related icons or interfaces on the screen.
  • the method may further comprise identifying a user's operation of the interactive interface of the information device, converting the operation to a corresponding operational command, and transmitting it to the information device over the network.
  • the information device can perform the corresponding operations in response to the received related operational instructions.
  • users can interact with information devices in a variety of ways. For example, the user can configure and operate the information device through the interactive interface displayed on the screen of the terminal device, for example using touch-screen input or keyboard input.
  • the manner of operation of the information device, such as voice control or gesture control, may be predefined.
  • when the operation mode of the information device is configured as voice control, the terminal device detects voice input and performs voice recognition, converts the received voice into an operational command, and sends the control command over the network to the information device to operate it.
  • when the operation mode of the information device is configured as gesture control, the user's gesture may be captured by the imaging device of the terminal device or by a camera installed in the user's environment; gesture recognition is performed on the terminal device to convert the gesture into a corresponding operational command, which is sent over the network to control the related information device.
  • gestures associated with each information device operation may be predefined; for example, gestures for operating a light may include opening the palm to turn the light on, making a fist to turn it off, pointing a finger up to increase the brightness, and pointing a finger down to decrease the brightness.
  • in embodiments of the present invention, any optical tag (or light source) that can be used to communicate information can be used.
  • the method of the present invention can be applied to light sources that transmit information through different stripe patterns based on the rolling-shutter effect of CMOS sensors (for example, the optical communication device described in Chinese patent publication CN104168060A), and can also be applied to the optical tags described in, for example, patent publication CN105740936A, whose transmitted information can be identified by a CCD sensor, as well as to arrays of such optical tags (or light sources).
  • appearances of the phrases “in the various embodiments”, “in some embodiments”, “in one embodiment”, or “in an embodiment” are not necessarily referring to the same embodiment.
  • the particular features, structures, or properties may be combined in any suitable manner in one or more embodiments.
  • the particular features, structures, or properties shown or described in connection with one embodiment may be combined, in whole or in part, with the features, structures, or properties of one or more other embodiments without limitation, as long as the combination is not illogical or non-functional.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Toxicology (AREA)
  • Artificial Intelligence (AREA)
  • Microelectronics & Electronic Packaging (AREA)
  • Computer Hardware Design (AREA)
  • Automation & Control Theory (AREA)
  • User Interface Of Digital Computer (AREA)
  • Position Input By Displaying (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Optical Communication System (AREA)

Abstract

An optical tag-based information device interaction method and system. The initial position and posture of a terminal device relative to an optical tag are determined by using the terminal device to capture images of the optical tag, which is located at a fixed position relative to the information devices; the relative position between the terminal device and each information device is then determined by combining the pre-calibrated position of each information device relative to the optical tag; the imaging position of each information device on the display screen of the terminal device is then obtained, and the interactive interface of each information device is presented at its imaging position on the display screen for interaction with the respective information device. This allows the user to manipulate the information devices in the field of view anytime and anywhere, with WYSIWYG interaction.

Description

Optical tag-based information device interaction method and system

TECHNICAL FIELD

The present invention relates to the fields of optical information technology and location-based services, and more particularly to a method and system for device interaction using optical tags. Herein, optical tags are also referred to as optical communication devices.
BACKGROUND

With the continuous development of technologies such as the mobile Internet, the Internet of Things, and big data, the smart home industry has developed rapidly, and many information appliances with digitalized, networked, and intelligent functions have emerged. These information appliances can be networked with one another and can be interactively controlled over the network. With the popularity of smart portable devices such as mobile phones, more and more smart home systems use a mobile phone to help the user control home appliances, so that as long as the user's phone can connect to the network, the user can control the electrical devices at home anytime and anywhere. However, when there are many appliances, the user has to browse and repeatedly make selections on the phone; such operations are cumbersome and easily put users off.
SUMMARY OF THE INVENTION

In view of the above problems, the present invention provides a new optical tag-based information device interaction method and system, which allow the user to manipulate the devices in the field of view anytime and anywhere, with WYSIWYG interaction.

The object of the present invention is achieved by the following technical solutions:

In one aspect, the present invention provides an optical tag-based information device interaction method, the method comprising:

S1) determining the initial position and posture of a terminal device relative to an optical tag by using the terminal device carried by a user to capture images of the optical tag, which is located at a fixed position relative to an information device;

S2) determining the relative position between the terminal device and each information device based on the determined position of the terminal device and the pre-calibrated position of each information device relative to the optical tag;

S3) calculating the imaging position of each information device on the display screen of the terminal device according to the determined posture of the terminal device and its relative position to each information device;

S4) presenting the interactive interface of each information device at its imaging position on the display screen for interaction with the respective information device.
The above method may further comprise: adjusting the imaging position of each information device on the display screen of the terminal device in response to a change in the position and/or posture of the terminal device.

The above method may further comprise:

identifying the user's operation on the interactive interface of an information device; and

converting the identified operation into a corresponding operational command and sending it to the information device over the network.

In the above method, the user's operation on the interactive interface of the information device may be at least one of the following: screen input, keyboard input, voice input, or gesture input.
In another aspect, the present invention also provides an optical tag-based information device interaction system, the system comprising one or more information devices, an optical tag located at a fixed position relative to the information devices, a server storing information related to the information devices and the optical tag, and a terminal device with an imaging device;

wherein the terminal device is configured to:

capture images of the optical tag located at a fixed position relative to the information device to be accessed, to determine the initial position and posture of the terminal device relative to the optical tag;

determine the relative position between the terminal device and each information device based on the determined position of the terminal device and the pre-calibrated position of each information device relative to the optical tag obtained from the server;

calculate the imaging position of each information device on the display screen of the terminal device according to the determined posture of the terminal device and its relative position to each information device;

present the interactive interface of each information device at its imaging position on the display screen for interaction with the respective information device.

In the above system, the terminal device may be further configured to: adjust the imaging position of each information device on the display screen of the terminal device in response to a change in the position and/or posture of the terminal device.

In the above system, the terminal device may be further configured to:

identify the user's operation on the interactive interface of an information device; and

convert the identified operation into a corresponding operational command and send it to the information device over the network.

In the above system, the user's operation on the interactive interface of the information device may be at least one of the following: screen input, keyboard input, voice input, or gesture input.
The present invention also relates to a computing device comprising a processor and a memory, the memory storing a computer program which, when executed by the processor, can be used to implement the above method.

The present invention also relates to a storage medium storing a computer program which, when executed, can be used to implement the above method.
BRIEF DESCRIPTION OF THE DRAWINGS

Embodiments of the present invention are further described below with reference to the accompanying drawings, in which:

Fig. 1 is a schematic diagram of the basic principle of the triangulation method;

Fig. 2 is a schematic diagram of the principle of the imaging process of an imaging device capturing an optical tag;

Fig. 3 is a schematic diagram of the simplified relationship between the object coordinate system and the image coordinate system; and

Fig. 4 is a schematic flowchart of an optical tag-based information device interaction method according to an embodiment of the present invention.
DETAILED DESCRIPTION

To make the objects, technical solutions, and advantages of the present invention clearer, the present invention is described in further detail below through specific embodiments with reference to the accompanying drawings. It should be understood that the specific embodiments described here are intended only to explain the present invention and not to limit it.
Barcodes and QR codes have been widely adopted to encode information. When these codes are scanned with a specific device or software, the corresponding information is recognized. However, the recognition distance of barcodes and QR codes is very limited. For example, when scanning a QR code with a mobile phone camera, the phone must usually be held within a relatively short distance, typically only about 15 times the width of the QR code. Therefore, for long-distance recognition (for example, at a distance of 200 times the width of the QR code), barcodes and QR codes usually do not work, or very large codes must be custom-made, which raises costs and is in many situations impossible due to various other constraints.
An optical tag, in contrast, transmits information by emitting different light. It offers long range, loose requirements on visible-light conditions, strong directionality, and locatability, and the information it conveys can change rapidly over time, providing a larger information capacity (see, for example, the optical communication devices described in Chinese patent publications CN104168060A and CN105740936A). Compared with traditional QR codes, optical tags have a stronger information interaction capability and can thus provide great convenience to users and merchants.
In embodiments of the present invention, the optical tag may be any optical communication device capable of transmitting different information by emitting different light. In one embodiment, the optical tag may comprise at least one light source and a controller, the controller being used to control the light emitted by the light source so as to convey different information. For example, the controller may make the light source emit different light by changing an attribute of the emitted light. The attribute of the light may be any attribute that an optical imaging device (e.g., a CMOS imaging device) can perceive: an attribute perceptible to the human eye, such as the intensity, color, or wavelength of the light, or an attribute imperceptible to the human eye, such as a change in the intensity, color, or wavelength of electromagnetic radiation outside the visible range, or any combination of the above. Thus, a change in the light's attributes may be a change in a single attribute or a change in a combination of two or more attributes. When the intensity of the light is selected as the attribute, the change can be achieved simply by turning the light source on or off. In the following, turning the light source on or off is used to change the attributes of the light for simplicity, but those skilled in the art will understand that other ways of changing the light's attributes are also feasible.
Various forms of light sources can be used in the optical tag, as long as one of their attributes perceptible to an optical imaging device can be varied at different frequencies. The light source may include various common optical components, such as a light guide plate, a soft-light panel, a diffuser, and the like. For example, the light source may be an LED lamp, an array of multiple LED lamps, a display screen or part of one; even an illuminated area of light (for example, a patch of light cast on a wall) can serve as a light source. The light source may have various shapes, such as circular, square, rectangular, strip-shaped, or L-shaped.
In one embodiment, the controller of the optical tag may control the attributes of the light emitted by each light source in order to convey information. For example, the "0" or "1" of binary digital information can be represented by turning each light source on or off, so that the multiple light sources in the optical tag can be used to represent a sequence of binary digital information. As those skilled in the art will understand, each light source can be used to represent not only a binary digit but also ternary or higher-base data: for example, by selecting the intensity of the emitted light from three or more levels, or its color from three or more colors, or even a combination of intensity and color, each light source can represent ternary or higher-base data. Compared with the traditional QR code, the optical tag of the present invention can therefore significantly increase the data encoding density. A small encoding sketch follows.
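By way of illustration, the following Python sketch encodes an integer as per-frame emission levels of a single light source in an arbitrary base (2 for on/off keying, 3 or more for multiple intensity levels); the base, frame count, and level values are assumptions, not part of the patent:

```python
# Encode an integer as the intensity level (0 .. base-1) that one light
# source should emit in each of n_frames consecutive frames.
def encode_symbols(value: int, base: int, n_frames: int) -> list[int]:
    symbols = []
    for _ in range(n_frames):
        symbols.append(value % base)
        value //= base
    return list(reversed(symbols))

# With base 3 and 4 frames a single source conveys 3**4 = 81 values,
# versus 2**4 = 16 for plain on/off keying.
print(encode_symbols(57, base=3, n_frames=4))  # [2, 0, 1, 0] = 57 in base 3
```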
In yet another embodiment, the controller of the optical tag may control the light source to change the attributes of the emitted light at a certain frequency, so that the optical tag of the present invention can represent different data information, e.g., different sequences of binary digital information, at different times. Thus, when the optical tag of the present invention is photographed continuously with an optical imaging device (e.g., at a rate of 30 frames per second), each frame of the image can be used to represent a group of information sequences, which further significantly increases the data encoding density compared with the traditional static QR code.

In embodiments of the present application, a common optical imaging device or image acquisition device can be used to image the optical tag, and the transmitted information, e.g., a sequence of binary 1s and 0s, can be determined from each frame of the image, thereby accomplishing the transfer of information from the optical tag to the optical imaging device. The optical imaging device or image acquisition device may include an image acquisition element, a processor, a memory, and the like; it may, for example, be a mobile terminal with a photographing function, such as a mobile phone, a tablet, or smart glasses, which may include an image acquisition module and an image processing module. The user visually finds the optical tag within viewing distance and scans it by pointing the imaging sensor of the mobile terminal at the optical tag, capturing and interpreting the information. When the controller of the optical tag changes the attribute of the emitted light at a certain frequency, the image acquisition frequency of the mobile terminal can be set to be greater than or equal to twice the attribute-change frequency of the light source. The recognition and decoding process is completed by decoding the acquired image frames. In one embodiment, to avoid duplication or omission of image frames, the information transmitted by the optical tag may include a sequence number, check bits, a timestamp, and the like. As needed, a start frame or an end frame, or both, may be provided among the multiple image frames to indicate the start or end position of a complete cycle of the frames; the start or end frame may be set to display a special data combination, for example all 0s or all 1s, or any special combination that cannot coincide with the information actually displayed.
Taking a CMOS imaging device as an example, when consecutive frames of the light source are captured by the CMOS imaging device, the controller can make the switching interval between the working modes of the light source equal to the duration of one complete frame of the CMOS imaging device, thereby achieving frame synchronization between the light source and the imaging device. Assuming each light source transmits 1 bit of information per frame, then at a shooting speed of 30 frames per second each light source can transmit 30 bits per second, giving an encoding space of 2^30. The information may include, for example, a start frame marker (frame header), the ID of the optical tag, a password, a verification code, URL information, address information, a timestamp, or various combinations thereof. The order of these pieces of information can be set in a structured manner to form a data packet structure. Each time a complete data packet structure is received, a complete set of data (one data packet) is considered to have been obtained, which can then be read and checked. Table 1 shows an example data packet structure according to an embodiment of the present invention:
Table 1
Frame header | Attribute field (optional) | Data field | Check bits | Frame footer
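One possible way to frame and validate such a packet is sketched below; the marker patterns, the single parity bit, and the omission of the optional attribute field are all assumptions made for illustration, not part of this disclosure:

    # Illustrative packet framing for Table 1 (attribute field omitted).
    HEADER, FOOTER = [1, 1, 1, 1], [0, 0, 0, 0]    # assumed frame markers

    def build_packet(data_bits):
        parity = [sum(data_bits) % 2]               # assumed 1-bit check field
        return HEADER + data_bits + parity + FOOTER

    def parse_packet(bits):
        """Return the data field if the markers and parity check out."""
        if bits[:4] != HEADER or bits[-4:] != FOOTER:
            return None
        data, parity = bits[4:-5], bits[-5]
        return data if sum(data) % 2 == parity else None

    packet = build_packet([1, 0, 1, 1, 0, 0, 1, 0])
    print(parse_packet(packet))   # -> [1, 0, 1, 1, 0, 0, 1, 0]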
Compared with traditional QR codes, the above optical tag transmits information by emitting different light and has the advantages of long range, loose requirements on visible-light conditions, strong directionality, and locatability; moreover, the information transmitted by the optical tag can change rapidly over time, providing a large information capacity. The optical tag therefore has a stronger information interaction capability, bringing great convenience to users and merchants. To provide corresponding services to users and merchants based on an optical tag, each optical tag is assigned a unique identifier (ID), which is used by the manufacturer, manager, user, and so on of the optical tag to uniquely recognize or identify it. In general, the optical tag publishes its identifier, and a user can use, for example, the image acquisition device or imaging apparatus built into a mobile phone to capture images of the optical tag and obtain the information it transmits (for example the identifier), and can then access the services provided on the basis of the optical tag.
In embodiments of the present invention, the aforementioned optical tag can also be used to precisely locate the imaging device that scans it (this may also be called reverse positioning or relative positioning). For example, the geographic location information of the optical tag can be registered in advance, for example on a server. During operation, the optical tag can transmit its identification information (for example ID information), and the imaging device can obtain this ID information by scanning the optical tag; having obtained the ID information, the imaging device queries the server with it to obtain the geographic location corresponding to the optical tag, and reverse positioning can then be performed to determine the specific position of the imaging device. Optionally, other related information of the optical tag, such as physical size information, physical shape information, and/or orientation information, can also be registered on the server in advance. In one embodiment, the optical tag may have a uniform or default physical size or shape, and the user device may know that physical size or shape.
Various feasible reverse positioning methods can be used to determine the relative positional relationship between the user (in fact, the user's imaging device) and the optical tag. For example, the relative distance between the imaging device and the optical tag can be determined (for example from the imaging size of the optical tag, or with any distance-measuring application on the phone), and two or more optical tags can then be used to determine, by triangulation, the relative positional relationship between the imaging device and any one of the optical tags. The relative positional relationship between the imaging device and an optical tag can also be determined by determining their relative distance and analyzing the perspective distortion of the optical tag's image on the imaging device. When determining the relative positional relationship, the physical size information and/or orientation information of the optical tag can further be used; this physical size information and/or orientation information can be stored on the server in association with the identification information of the optical tag.
For example, in one embodiment, at least two optical tags can be used for positioning. The following steps can be performed for each optical tag:
Step 1: use the imaging device to acquire the ID information of the optical tag.
Step 2: query with the ID information to obtain the physical size information and geographic location information of the optical tag.
Step 3: photograph the optical tag with the default focal length of the imaging device to obtain an image of the optical tag. Because the default focal length is used, the captured image of the optical tag may be relatively blurry.
Step 4: adjust and optimize the focal length of the imaging device to obtain a clear image of the optical tag. For example, starting from the default focal length, first try increasing it; if the tag image becomes clearer, continue to increase the focal length, and if it becomes blurrier, adjust in the opposite direction, i.e., decrease the focal length, and vice versa. During the focal length adjustment, to determine the clarity of the tag image, texture features can be extracted from it: the clearer the tag image, the simpler the corresponding texture information and the lower the texture density. The optimal focal length parameter can therefore be determined from the texture density of the tag image; when no smaller texture density can be obtained after several iterations, the image with the smallest texture density can be considered the clear image, and the focal length parameter corresponding to that smallest texture density is taken as the optimal focal length parameter (a minimal sketch of this search is given after the steps).
Step 5: based on the optimal focal length parameter, capture a clear image of the optical tag, and then, using the simple thin-lens equation and the object-image relationship, calculate the relative distance between the imaging device and the optical tag from the size of the clear tag image, the physical size of the tag, and the optimal focal length parameter.
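The focal-length search of Step 4 can be sketched as a simple hill-climb; capture_at(f) and density(img) are assumed helpers (the disclosure does not fix a particular texture-density measure), and following the description above a lower density is treated as a sharper image:

    def autofocus(capture_at, density, f0, step=0.05, max_iter=20):
        """Hill-climb the focal length so as to minimise the texture
        density of the captured tag image, reversing and shrinking the
        step whenever the image gets blurrier."""
        f, best = f0, density(capture_at(f0))
        direction = 1.0
        for _ in range(max_iter):
            trial = f + direction * step
            d = density(capture_at(trial))
            if d < best:                        # clearer: keep this direction
                f, best = trial, d
            else:                               # blurrier: reverse, refine
                direction, step = -direction, step * 0.5
        return f                                # optimal focal length parameter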
After the relative distance between the imaging device and each of at least two optical tags has been obtained, the specific position information of the imaging device, i.e., its specific coordinates in the physical-world coordinate system, can be determined by triangulation. Fig. 1 is a schematic diagram of the triangulation method, in which two optical tags (optical tag 1 and optical tag 2) are used for triangulation.
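Under a pinhole-camera assumption, the Step 5 distance estimate and the triangulation of Fig. 1 can be sketched as follows (in 2D for brevity; all numeric values are illustrative, and the two returned candidates correspond to the ambiguity discussed in the next paragraph):

    import math

    def distance_to_tag(tag_height_m, image_height_px, focal_px):
        """Thin-lens / pinhole relation: distance ~ f * H_object / h_image."""
        return focal_px * tag_height_m / image_height_px

    def triangulate(p1, d1, p2, d2):
        """Intersect two circles centred on tag positions p1 and p2 (metres)
        with radii d1 and d2; returns the two candidate device positions."""
        (x1, y1), (x2, y2) = p1, p2
        dx, dy = x2 - x1, y2 - y1
        d = math.hypot(dx, dy)
        a = (d1 ** 2 - d2 ** 2 + d ** 2) / (2 * d)   # offset along the baseline
        h = math.sqrt(max(d1 ** 2 - a ** 2, 0.0))    # offset off the baseline
        mx, my = x1 + a * dx / d, y1 + a * dy / d
        return ((mx + h * dy / d, my - h * dx / d),
                (mx - h * dy / d, my + h * dx / d))

    # Example: tags 4 m apart, both measured 5 m away -> two mirror candidates.
    print(triangulate((0.0, 0.0), 5.0, (4.0, 0.0), 5.0))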
In addition, when two optical tags are used for triangulation, two candidate positions are usually obtained, in which case a choice between the two may be necessary. In one implementation, one of the candidate positions can be selected with the help of the positioning information of the imaging device itself (for example a mobile phone), such as GPS information; for example, the candidate position closer to the GPS fix can be chosen. In another implementation, the orientation information of each optical tag can further be considered; the orientation information in effect delimits the region from which the optical tag can be observed, so one of the candidate positions can be selected on this basis. The orientation information of an optical tag can likewise be stored on the server and queried via the tag's ID information. The above embodiment has been described with two optical tags as an example, but those skilled in the art will understand that the triangulation-based method is equally applicable to three or more optical tags. In fact, using three or more optical tags enables more precise positioning, and multiple candidate points usually do not arise.
In yet another embodiment, the following reverse positioning method can also be used; this embodiment does not require at least two optical tags but can perform reverse positioning with a single optical tag. The method of this embodiment includes the following steps:
Step 1: use the imaging device to acquire the ID information of the optical tag.
Step 2: query with the ID information to obtain the geographic location information of the optical tag and information related to a number of points on it, for example the positions of these points on the optical tag and their coordinate information.
Step 3: photograph the optical tag with the default focal length of the imaging device to obtain an image of the optical tag.
Step 4: adjust and optimize the focal length, for example as described above: the optimal focal length parameter can be determined from the texture density of the tag image, and when no smaller texture density can be obtained after several iterations, the image with the smallest texture density can be considered the clear image, with the corresponding focal length parameter taken as the optimal focal length parameter.
Step 5: based on the optimal focal length parameter, capture a clear image of the optical tag and carry out reverse positioning as described below:
Referring to Fig. 2, which is a schematic diagram of the imaging process of an optical tag on an imaging device: an object coordinate system (X, Y, Z) is established with the centroid of the optical tag as origin, and an image coordinate system (x, y, z) is established with the position F_c of the imaging device as origin; the object coordinate system is also called the physical-world coordinate system, and the image coordinate system is also called the camera coordinate system. In addition, a two-dimensional coordinate system (u, v), called the image-plane coordinate system, is established in the image plane of the optical tag with the top-left point of the captured tag image as coordinate origin; the intersection of this image plane with the optical axis (i.e., the z axis) is the principal point, and (c_x, c_y) are the coordinates of the principal point in the image-plane coordinate system. Any point P on the optical tag has coordinates (X, Y, Z) in the object coordinate system; the corresponding image point is q, with coordinates (x, y, z) in the image coordinate system and (u, v) in the image-plane coordinate system. During imaging, the image coordinate system undergoes not only a translation but also an angular rotation relative to the object coordinate system; the relationship between the object coordinate system (X, Y, Z) and the image coordinate system (x, y, z) can be expressed as:
$$\begin{pmatrix} x \\ y \\ z \end{pmatrix} = R \begin{pmatrix} X \\ Y \\ Z \end{pmatrix} + t \qquad (1)$$

Define the variables $x' = x/z$ and $y' = y/z$; then the coordinates in the image-plane coordinate system are

$$u = f_x x' + c_x, \qquad v = f_y y' + c_y \qquad (2)$$

where $f_x$ and $f_y$ are the focal lengths of the imaging device in the x- and y-axis directions respectively, and $(c_x, c_y)$ are the coordinates of the principal point in the image-plane coordinate system; $f_x$, $f_y$, $c_x$, and $c_y$ are all internal parameters of the imaging device and can be measured in advance. The rotation matrix R and the translation vector t represent, respectively, the attitude information of the object coordinate system relative to the image coordinate system (i.e., the attitude of the imaging device relative to the optical tag, namely the deviation of the device's central axis from the optical tag, also called the orientation of the imaging device relative to the optical tag; for example, when the imaging device squarely faces the optical tag, there is no rotation) and the displacement information (i.e., the displacement between the imaging device and the optical tag). In three-dimensional space, a rotation can be decomposed into two-dimensional rotations about the respective coordinate axes. If rotations through the angles $\psi$, $\varphi$, and $\theta$ are performed about the x, y, and z axes in turn, the total rotation matrix R is the product of the three matrices $R_x(\psi)$, $R_y(\varphi)$, and $R_z(\theta)$, i.e.:

$$R = R_z(\theta)\, R_y(\varphi)\, R_x(\psi)$$

where

$$R_x(\psi) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\psi & -\sin\psi \\ 0 & \sin\psi & \cos\psi \end{pmatrix}, \quad R_y(\varphi) = \begin{pmatrix} \cos\varphi & 0 & \sin\varphi \\ 0 & 1 & 0 \\ -\sin\varphi & 0 & \cos\varphi \end{pmatrix}, \quad R_z(\theta) = \begin{pmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{pmatrix}$$

For simplicity, and because the expansion is well known in the art, the computation is not carried out here; the rotation matrix is simply written in the following form:

$$R = \begin{pmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{pmatrix}$$

and the translation vector t can simply be written in the following form:

$$t = \begin{pmatrix} t_x \\ t_y \\ t_z \end{pmatrix}$$

This yields the following relation:

$$s \begin{pmatrix} u \\ v \\ 1 \end{pmatrix} = \begin{pmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{pmatrix} \left[ \begin{pmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{pmatrix} \begin{pmatrix} X \\ Y \\ Z \end{pmatrix} + \begin{pmatrix} t_x \\ t_y \\ t_z \end{pmatrix} \right] \qquad (3)$$

where s is the object-image conversion factor, equal to the ratio of the size of the image plane to the resolution of the imaging device, which is likewise known.
From the information obtained in Step 2 about a number of points on the optical tag (for example at least four points A, B, C, and D), such as the positions of these points on the optical tag, the corresponding image points in the tag image, for example A', B', C', and D', are determined. The four points A, B, C, and D may, for example, be located on the left and right sides of the optical tag respectively, or may be four separate point light sources located at the four corners of the optical tag, and so on. The coordinate information of these four points, (X_A, Y_A, Z_A), (X_B, Y_B, Z_B), (X_C, Y_C, Z_C), and (X_D, Y_D, Z_D), is also obtained in Step 2 above. By measuring the coordinates (u_A', v_A'), (u_B', v_B'), (u_C', v_C'), and (u_D', v_D') of the corresponding four image points A', B', C', and D' in the image-plane coordinate system and substituting them into relation (3), the rotation matrix R and the translation vector t can be solved for, which gives the relationship between the object coordinate system (X, Y, Z) and the image coordinate system (x, y, z). Based on this relationship, the attitude information and displacement information of the imaging device relative to the optical tag are obtained, thereby positioning the imaging device. Fig. 3 shows, in simplified form, the relationship between the object coordinate system and the image coordinate system. Then, from the geographic location information of the optical tag obtained in Step 2 above, the actual specific position and attitude of the imaging device can be computed by means of the rotation matrix R and the translation vector t: the translation vector t determines the specific position of the imaging device, and the rotation matrix R determines its attitude relative to the optical tag.
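Solving relation (3) from four such point correspondences is the classical perspective-n-point (PnP) problem; the following sketch uses OpenCV's solvePnP as one possible solver (OpenCV is not prescribed by this disclosure, and all coordinates and intrinsics below are illustrative values):

    import numpy as np
    import cv2

    # Object-frame coordinates (metres) of points A, B, C, D on the tag,
    # as fetched from the server by tag ID; values here are illustrative.
    obj_pts = np.array([[-0.10,  0.05, 0.0], [0.10,  0.05, 0.0],
                        [ 0.10, -0.05, 0.0], [-0.10, -0.05, 0.0]],
                       dtype=np.float64)
    # Measured image-plane coordinates (pixels) of A', B', C', D'.
    img_pts = np.array([[612.0, 340.0], [708.0, 338.0],
                        [710.0, 398.0], [610.0, 400.0]], dtype=np.float64)
    # Internal parameters f_x, f_y, c_x, c_y, known in advance as stated above.
    K = np.array([[1000.0, 0.0, 640.0],
                  [0.0, 1000.0, 360.0],
                  [0.0, 0.0, 1.0]])

    ok, rvec, t = cv2.solvePnP(obj_pts, img_pts, K, None)
    R, _ = cv2.Rodrigues(rvec)        # rotation matrix R; t is the translation
    print(ok, (-R.T @ t).ravel())     # camera position in the tag's frame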
In one embodiment of the present invention, an optical-tag-based information device interaction control system is also provided. An information device in this system is any computing apparatus that can be interactively controlled over a network, including but not limited to information appliances or home devices. Each information device may be associated with one or more optical tags, and each optical tag may be associated with one or more information devices. An optical tag may be mounted on the information device, or may be located at a position fixed relative to it. The physical position of the optical tag and the position of each information device relative to the optical tag are calibrated in advance. Information related to the optical tags and their associated information devices can be stored on a server for query. The information related to an optical tag may include, for example, the tag's ID information, its physical-world coordinates, physical size, orientation, the identifiers of the information devices associated with it, and the positions of a number of points on the optical tag together with their object coordinates. The information related to an information device may include, for example, the device's identifier, its coordinates in the object coordinate system established with the centroid of its associated optical tag as origin, its position relative to its associated optical tag, its operating interface, its description, size, orientation, and so on.
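A hypothetical shape for the server-side records just described might be as follows; all field names and values are illustrative placeholders, not prescribed by this disclosure:

    # Illustrative server-side registry keyed by optical-tag ID.
    OPTICAL_TAGS = {
        "tag-001": {
            "world_coords": (116.30, 39.98, 12.0),    # assumed lon/lat/height
            "size_m": (0.20, 0.10),
            "orientation_deg": 180.0,
            "points": {"A": (-0.10, 0.05, 0.0), "B": (0.10, 0.05, 0.0),
                       "C": (0.10, -0.05, 0.0), "D": (-0.10, -0.05, 0.0)},
            "devices": ["lamp-17", "tv-03"],
        },
    }
    INFO_DEVICES = {
        "lamp-17": {"tag_frame_coords": (1.2, -0.4, 0.0),   # pre-calibrated
                    "interface": "light_panel", "size_m": (0.3, 0.3)},
        "tv-03":   {"tag_frame_coords": (-2.0, 0.1, 0.3),
                    "interface": "tv_remote", "size_m": (1.2, 0.7)},
    }

    def lookup(tag_id):
        """What a terminal might fetch after decoding a tag's ID."""
        tag = OPTICAL_TAGS[tag_id]
        return tag, {d: INFO_DEVICES[d] for d in tag["devices"]}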
When a user wishes to interact with an information device in his or her field of view, the user can use the imaging apparatus of a terminal device carried on the person (for example a mobile phone) to capture images of the optical tag associated with that information device and obtain the tag's ID information. The terminal device can then use the tag's ID information to obtain from the server the information related to the information devices associated with the tag, and can present the interaction interface of an information device at the position where that device appears on the terminal device's display screen. In this way, the user can perform the relevant interactive control operations on the information device through the interaction interface superimposed on or near it. Preferably, before presenting the interaction interface of an information device at the position where the device is currently shown on the terminal screen, it can first be judged whether any information device associated with the optical tag will appear on the terminal device's display screen, and only if the judgment is affirmative is the imaging position of that device on the display screen further determined, for example its two-dimensional image-plane coordinates on the screen.
To this end, the reverse positioning methods mentioned above can first be used to determine the initial relative positional relationship between the terminal device carried by the user and the optical tag, and thereby the initial position and initial orientation of the user's terminal device. Further, since the physical position of the optical tag and the position of each information device relative to the tag have been calibrated in advance, the initial relative positional relationships between the user's terminal device and the information devices can be determined from the terminal device's initial position and the pre-stored calibration information. From these initial relative positional relationships and the terminal device's initial orientation, it can be determined whether any information device associated with the optical tag currently appears on the terminal device's display screen, and if so, the imaging position of that device on the screen is further determined. If the information device the user wishes to control does not appear on the current display screen, the user can move the terminal device from the initial position so that the device comes into view; for example, the user can translate or rotate the terminal device so that its camera finally points toward the device. When the terminal device moves from the initial position, the change in its position and attitude can be detected in a variety of existing ways (for example, monitored by sensors built into the terminal device, such as an accelerometer and a gyroscope), so as to determine the position and orientation information of the terminal device after the movement. From that position and orientation information, it can be determined which information devices currently appear on the terminal device's display screen and where each is presented (a minimal visibility check is sketched below). The interaction interfaces of these information devices can then be superimposed at their respective imaging positions on the display screen, achieving what-you-see-is-what-you-get interaction with each information device.
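Given the pose (R, t) recovered above and the camera intrinsics K, a minimal visibility check following this logic could be (an illustrative helper, not a prescribed implementation):

    import numpy as np

    def on_screen(point_tag_frame, R, t, K, width, height):
        """Project a device's tag-frame coordinates and report whether
        (and where) it falls on the terminal's display screen."""
        Xc = R @ np.asarray(point_tag_frame, dtype=float) + np.ravel(t)
        if Xc[2] <= 0:                       # behind the camera plane
            return False, None
        u = K[0, 0] * Xc[0] / Xc[2] + K[0, 2]
        v = K[1, 1] * Xc[1] / Xc[2] + K[1, 2]
        return (0 <= u < width and 0 <= v < height), (u, v)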
In one embodiment, as described above, when an imaging device captures images of an optical tag, there is a certain transformation relationship (for example formula (1)) between the physical-world object coordinate system (X, Y, Z) established with the tag's centroid as origin and the camera coordinate system (x, y, z) established with the imaging device's position as origin, which can be described by the rotation matrix R and the translation vector t. There is also a certain transformation relationship (for example formula (2)), determined by the internal parameters of the imaging device, between the camera coordinate system and the two-dimensional image-plane coordinate system (u, v) whose origin is the top-left point of the tag image captured on the camera screen. Thus, once the rotation matrix R, the translation vector t, and the internal parameters of the imaging device have been determined, the transformation relationship between physical-world coordinates and image-plane coordinates (for example formula (3)) is also determined. This transformation relationship, also called the projection relationship, can be used to determine the projected position in the imaging plane of a real object located at a given position in the physical-world coordinate system.
As described above in connection with the reverse positioning method and formulas (1)-(3), in the process of determining the specific position and orientation of the imaging device relative to the optical tag from the tag images captured by the imaging apparatus and the tag-related information obtained from the server, the rotation matrix R, the translation vector t, and the internal parameters of the imaging device in formula (3) have already been determined, so the image-plane coordinates of each information device can be determined through this formula. Since the physical position of the optical tag and the relative position between the tag and each information device are set in advance, the object coordinates of each information device in the physical-world coordinate system can be determined from the relative position between the tag and the device; substituting them into formula (3) yields the device's image-plane coordinates in the imaging plane, and the device's interaction interface can then be presented on the terminal screen at those image-plane coordinates for the user to use. In yet another embodiment, an icon of the information device can instead be superimposed over the device on the terminal screen for the user to select; when the user taps the icon to select the device to operate, the device's interaction interface is presented on the terminal screen for the user to operate and control the device. If icons occlude one another, the front icon can be made semi-transparent, or a numeric hint can be shown near the frontmost icon to indicate that several icons overlap at that position (one possible grouping scheme is sketched below).
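One simple way to realize the numeric-hint behavior just described is to greedily group projected icon centres that fall within a pixel radius of one another; the radius and coordinates below are assumptions for illustration:

    import math

    def group_icons(centres, radius=40.0):
        """Cluster projected icon centres; an entry with count > 1 would be
        drawn with a numeric badge indicating overlapped icons."""
        groups = []
        for c in centres:
            for g in groups:
                if math.hypot(c[0] - g["pos"][0], c[1] - g["pos"][1]) < radius:
                    g["count"] += 1
                    break
            else:
                groups.append({"pos": c, "count": 1})
        return groups

    print(group_icons([(100, 100), (110, 95), (400, 300)]))
    # -> [{'pos': (100, 100), 'count': 2}, {'pos': (400, 300), 'count': 1}]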
In this system, as the user carries the terminal device around, the position and orientation of the terminal device's imaging position in the physical world relative to the optical tag and the information devices change, and the positions at which the information devices appear in the terminal device's imaging plane change accordingly. It is therefore necessary to detect the position and attitude of the terminal device in real time and to adjust the above rotation matrix R and translation vector t in time, so as to ensure that accurate image-plane coordinates of the information devices are obtained. The terminal device can monitor changes in its own position and attitude in a variety of ways. For example, using the optical tag as a reference point, the terminal device can compare the image currently captured by the imaging apparatus with previous images, identify and compute the differences between the images to form feature points, and use these feature points to compute the change in its own position and attitude. As another example, a terminal device such as a mobile phone can estimate the change over time of its camera's position and orientation in the real world from the values measured by built-in inertial sensors such as an accelerometer and a gyroscope (a dead-reckoning sketch is given below). The rotation matrix R and the translation vector t are then adjusted according to the terminal device's current position and orientation, and the current image-plane coordinates of the information devices are re-obtained in order to present the relevant icons or interfaces on the screen.
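A dead-reckoning sketch of the inertial update just mentioned, under a small-angle approximation (a real system would periodically correct the accumulated drift against the optical tag, as described above):

    import numpy as np

    def imu_step(R, t, gyro_delta, displacement):
        """Apply one IMU sample to the pose (R, t): gyro_delta is the
        rotation increment (radians about x, y, z) over the sample
        interval, and displacement is the translation (metres) already
        integrated from the accelerometer."""
        wx, wy, wz = gyro_delta
        dR = np.array([[1.0, -wz,  wy],
                       [ wz, 1.0, -wx],
                       [-wy,  wx, 1.0]])    # I + [w]_x, first-order update
        return dR @ R, np.ravel(t) + np.asarray(displacement, dtype=float)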
After the user has selected the information device to operate, the user can interact with the device in a variety of ways. For example, the user can configure and operate the information device through the interaction interface shown on the terminal device's screen. As another example, the operating mode of an information device can be predefined, for example voice control or gesture control. When the operating mode of an information device is configured as voice control, after the user selects the device, the terminal device detects voice input and performs speech recognition, converts the received speech into operation instructions, and sends the control instructions to the information device over the network to operate it. When the operating mode of an information device is configured as gesture control, the user's gestures can be captured by the imaging apparatus of the terminal device or by cameras installed in the user's surroundings; gesture recognition is performed on the terminal device to convert the gestures into the corresponding operation instructions, which are sent over the network to control the relevant information device. The gestures associated with operating each information device can be predefined; for example, the gestures for operating a lamp may include an open palm to turn the lamp on, a fist to turn it off, a finger swipe up to increase brightness, and a finger swipe down to decrease brightness.
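The lamp example above could be tabulated as follows; the gesture labels, command strings, and the send() transport are hypothetical placeholders, not part of this disclosure:

    # Hypothetical gesture-to-instruction table for the lamp example above.
    LAMP_GESTURES = {"open_palm": "power_on", "fist": "power_off",
                     "swipe_up": "brightness_up", "swipe_down": "brightness_down"}

    def handle_gesture(gesture, device_id, send):
        """Convert a recognised gesture into an operation instruction and
        send it to the device over the network."""
        command = LAMP_GESTURES.get(gesture)
        if command is not None:
            send(device_id, command)

    handle_gesture("swipe_up", "lamp-17", lambda dev, cmd: print(dev, "<-", cmd))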
Fig. 4 is a schematic flowchart of an optical-tag-based information device interaction method according to an embodiment of the present invention. In step S1), a terminal device carried by the user captures images of an optical tag located at a position fixed relative to the information devices, so as to determine the initial position and attitude of the terminal device relative to the optical tag. For example, using the various reverse positioning methods described above, the initial relative positional relationship between the image-capturing imaging apparatus and the optical tag can be obtained by capturing images of the tag, thereby determining the terminal device's initial position and initial orientation. In step S2), the relative positions between the terminal device and the information devices are determined based on the terminal device's initial position and the pre-calibrated positions, mentioned above, of the information devices relative to the optical tag; then, in step S3), the imaging position of each information device on the display screen is determined. As described above in connection with the reverse positioning method and formulas (1)-(3), the physical position of the optical tag and the relative positions between the tag and the information devices are set in advance; in the process of determining the specific position and attitude of the imaging device relative to the tag from the captured tag images and the tag-related information obtained from the server, the rotation matrix R, the translation vector t, and the internal parameters of the imaging device in formula (3) have already been determined, so the image-plane coordinates of each information device can be determined through this formula. The object coordinates of each information device in the physical-world coordinate system can therefore be determined from the relative position between the tag and the device, and substituting them into formula (3) yields the device's image-plane coordinates in the imaging plane. Then, in step S4), the interaction interface of each information device can be superimposed at its imaging position on the display screen, for interactive operation with each information device.
In yet another embodiment, if the information device the user wishes to control is not on the current display screen, the user can move the terminal device from the initial position so that the device comes into view; for example, the user can translate or rotate the terminal device so that its camera finally points toward the device. When the terminal device moves from the initial position, the change in its position and attitude is detected so as to determine the position and attitude information of the terminal device after the movement; from that position and attitude information, it can be determined which information devices currently appear on the terminal device's display screen and where each is presented. The interaction interfaces of these information devices can then be superimposed at their respective presentation positions on the display screen, achieving what-you-see-is-what-you-get interaction with each information device. The terminal device can monitor changes in its own position and attitude in a variety of ways. For example, using the optical tag as a reference point, the terminal device can compare the image currently captured by the imaging apparatus with previous images, identify and compute the differences between the images to form feature points, and use these feature points to compute the change in its own position and attitude. As another example, a terminal device such as a mobile phone can estimate the change over time of its camera's position and orientation in the real world from the values measured by built-in inertial sensors such as an accelerometer and a gyroscope. The rotation matrix R and the translation vector t are then adjusted according to the terminal device's current position and orientation, and the current image-plane coordinates of the information devices are re-obtained in order to present the relevant icons or interfaces on the screen.
In yet another embodiment, the method may further comprise recognizing the user's operation on the interaction interface of an information device, converting the operation into the corresponding operation instruction, and sending it to the information device over the network. The information device can perform the corresponding operation in response to the received operation instruction. The user can interact with the information device in a variety of ways. For example, the user can configure and operate the information device through the interaction interface shown on the terminal device's screen, for example using touch-screen input or keyboard input. As another example, the operating mode of an information device can be predefined, for example voice control or gesture control. When the operating mode is configured as voice control, after the user selects the device, the terminal device detects voice input and performs speech recognition, converts the received speech into operation instructions, and sends the control instructions to the information device over the network to operate it. When the operating mode is configured as gesture control, the user's gestures can be captured by the imaging apparatus of the terminal device or by cameras installed in the user's surroundings; gesture recognition is performed on the terminal device to convert the gestures into the corresponding operation instructions, which are sent over the network to control the relevant information device. The gestures associated with operating each information device can be predefined; for example, the gestures for operating a lamp may include an open palm to turn the lamp on, a fist to turn it off, a finger swipe up to increase brightness, and a finger swipe down to decrease brightness.
In embodiments of the present invention, any optical tag (or light source) capable of transmitting information can be used. For example, the method of the present invention is applicable to light sources that transmit information through different stripes based on the rolling-shutter effect of CMOS (for example the optical communication device described in Chinese patent publication CN104168060A), to optical tags such as those described in patent CN105740936A, to various optical tags whose transmitted information can be recognized by a CCD photosensitive device, and to arrays of optical tags (or light sources).
References herein to "various embodiments", "some embodiments", "one embodiment", "an embodiment", and the like mean that particular features, structures, or properties described in connection with the embodiment are included in at least one embodiment. Accordingly, occurrences of the phrases "in various embodiments", "in some embodiments", "in one embodiment", "in an embodiment", and the like throughout this text do not necessarily refer to the same embodiment. Furthermore, particular features, structures, or properties may be combined in any suitable manner in one or more embodiments. Thus, particular features, structures, or properties shown or described in connection with one embodiment may be combined, in whole or in part, with the features, structures, or properties of one or more other embodiments without limitation, as long as the combination is not illogical or non-functional. Expressions such as "according to A" or "based on A" appearing herein are non-exclusive; that is, "according to A" may cover "according to A only" as well as "according to A and B", unless it is specifically stated, or clear from the context, that the meaning is "according to A only". In this application, some illustrative operational steps are described in a certain order for clarity of explanation, but those skilled in the art will understand that each of these steps is not indispensable, and some of them may be omitted or replaced by other steps. Nor must these steps be performed sequentially in the manner shown; on the contrary, some of them may be performed in a different order, or in parallel, according to actual needs, as long as the new manner of execution is not illogical or non-functional.
Although the present invention has been described through preferred embodiments, the present invention is not limited to the embodiments described here and also encompasses various changes and variations made without departing from the scope of the present invention.

Claims (10)

  1. An optical-tag-based information device interaction method, the method comprising:
    S1) capturing, by a terminal device carried by a user, images of an optical tag to determine an initial position and attitude of the terminal device relative to the optical tag;
    S2) determining relative positions between the terminal device and each information device based on the determined position of the terminal device and a pre-calibrated position of each information device relative to the optical tag;
    S3) calculating an imaging position of each information device on a display screen of the terminal device according to the determined attitude of the terminal device and its relative positions with respect to the information devices;
    S4) presenting an interaction interface of each information device at its imaging position on the display screen, for interactive operation with each information device.
  2. The method according to claim 1, further comprising: adjusting the imaging position of each information device on the display screen of the terminal device in response to a change in the position and/or attitude of the terminal device.
  3. The method according to claim 1, further comprising:
    recognizing the user's operation on the interaction interface of an information device; and
    converting the recognized operation into a corresponding operation instruction and sending it to the information device via a network.
  4. The method according to claim 3, wherein the user's operation on the interaction interface of the information device is at least one of the following: screen input, keyboard input, voice input, or gesture input.
  5. An optical-tag-based information device interaction system, the system comprising one or more information devices, an optical tag located at a position fixed relative to the information devices, a server for storing information related to the information devices and the optical tag, and a terminal device equipped with an imaging apparatus;
    wherein the terminal device is configured to:
    capture images of the optical tag to determine an initial position and attitude of the terminal device relative to the optical tag;
    determine relative positions between the terminal device and each information device based on the determined position of the terminal device and pre-calibrated positions, obtained from the server, of each information device relative to the optical tag;
    calculate an imaging position of each information device on a display screen of the terminal device according to the determined attitude of the terminal device and its relative positions with respect to the information devices;
    present an interaction interface of each information device at its imaging position on the display screen, for interactive operation with each information device.
  6. The system according to claim 5, wherein the terminal device is further configured to: adjust the imaging position of each information device on the display screen of the terminal device in response to a change in the position and/or attitude of the terminal device.
  7. The system according to claim 5, wherein the terminal device is further configured to:
    recognize the user's operation on the interaction interface of an information device; and
    convert the recognized operation into a corresponding operation instruction and send it to the information device via a network.
  8. The system according to claim 7, wherein the user's operation on the interaction interface of the information device is at least one of the following: screen input, keyboard input, voice input, or gesture input.
  9. A computing device comprising a processor and a memory, the memory storing a computer program which, when executed by the processor, can be used to implement the method according to any one of claims 1-4.
  10. A storage medium storing a computer program which, when executed, can be used to implement the method according to any one of claims 1-4.
PCT/CN2019/085997 2018-05-09 2019-05-08 Optical tag based information device interaction method and system WO2019214641A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
KR1020207035360A 2018-05-09 2019-05-08 Optical label based information device interaction method and system
JP2021512990A 2018-05-09 2019-05-08 Optical label based information device interaction method and system
EP19799577.2A EP3792711A4 (en) 2018-05-09 2019-05-08 INTERACTION METHOD AND SYSTEM OF AN INFORMATION DEVICE BASED ON OPTICAL LABELS
US17/089,711 US11694055B2 (en) 2018-05-09 2020-11-04 Optical tag based information apparatus interaction method and system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810435183.8A 2018-05-09 2018-05-09 Optical tag based information device interaction method and system
CN201810435183.8 2018-05-09

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/089,711 Continuation US11694055B2 (en) 2018-05-09 2020-11-04 Optical tag based information apparatus interaction method and system

Publications (1)

Publication Number Publication Date
WO2019214641A1 (zh)

Family

ID=68468452

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/085997 WO2019214641A1 (zh) 2018-05-09 2019-05-08 Optical tag based information device interaction method and system

Country Status (7)

Country Link
US (1) US11694055B2
EP (1) EP3792711A4
JP (1) JP7150980B2
KR (1) KR20210008403A
CN (1) CN110471580B
TW (1) TWI696960B
WO (1) WO2019214641A1


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104168060A 2014-07-09 2014-11-26 珠海横琴华策光通信科技有限公司 Method and device for transmitting/acquiring information using visible light signals
US20150025838A1 (en) * 2011-11-15 2015-01-22 Panasonic Corporation Position estimation device, position estimation method, and integrated circuit
CN105718840A * 2016-01-27 2016-06-29 西安小光子网络科技有限公司 Optical-tag-based information interaction system and method
CN105740936A 2014-12-12 2016-07-06 方俊 Optical tag and method and device for recognizing optical tags
CN107703872A * 2017-10-31 2018-02-16 美的智慧家居科技有限公司 Terminal control method and apparatus for home appliances, and terminal
CN107784414A * 2016-08-31 2018-03-09 湖南中冶长天节能环保技术有限公司 Production process parameter management system




Also Published As

Publication number Publication date
JP7150980B2 (ja) 2022-10-11
US11694055B2 (en) 2023-07-04
TWI696960B (zh) 2020-06-21
CN110471580B (zh) 2021-06-15
US20210056370A1 (en) 2021-02-25
TW201947457A (zh) 2019-12-16
KR20210008403A (ko) 2021-01-21
JP2021524119A (ja) 2021-09-09
CN110471580A (zh) 2019-11-19
EP3792711A4 (en) 2022-01-26
EP3792711A1 (en) 2021-03-17


Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application (ref document number: 19799577; country of ref document: EP; kind code of ref document: A1)
ENP Entry into the national phase (ref document number: 2021512990; country of ref document: JP; kind code of ref document: A)
NENP Non-entry into the national phase (ref country code: DE)
ENP Entry into the national phase (ref document number: 20207035360; country of ref document: KR; kind code of ref document: A)
ENP Entry into the national phase (ref document number: 2019799577; country of ref document: EP; effective date: 20201209)