WO2016070827A1 - Method and apparatus for issuing and transmitting identification information, and information recognition system - Google Patents

Method and apparatus for issuing and transmitting identification information, and information recognition system

Info

Publication number
WO2016070827A1
WO2016070827A1 (PCT/CN2015/093896)
Authority
WO
WIPO (PCT)
Prior art keywords
coordinate system
identification information
identified object
coordinates
information
Prior art date
Application number
PCT/CN2015/093896
Other languages
English (en)
Chinese (zh)
Inventor
刘海军
罗圣美
Original Assignee
中兴通讯股份有限公司
Priority date
2014-11-07
Filing date
2015-11-05
Publication date
2016-05-12
Application filed by 中兴通讯股份有限公司
Publication of WO2016070827A1

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer

Description

  • Embodiments of the present invention relate to, but are not limited to, computer vision technology, and more particularly to a method and apparatus for distributing and transmitting identification information and an information recognition system.
  • the development of digital multimedia and networks has enriched the entertainment experience in people's daily lives.
  • the current technology allows people to watch HDTV at home.
  • The sources of TV programs may include digital discs, cable TV, the Internet, and so on, with stereo, 5.1-channel, 7.1-channel and even more realistic sound effects. People can use a tablet (PAD) or a mobile terminal to obtain these experiences, transfer digital content between different devices over the network, and control playback on a device through remote control or gestures, for example using gesture control to switch to the previous or next channel.
  • The traditional way of controlling multiple devices is to use each device's own remote controller. These remote controllers are often not universal, and most of them, such as those of traditional televisions and stereos, have no network functions.
  • There are also network-capable remote controllers, for example software supporting interworking protocols loaded on a device with computing and network capabilities (such as a mobile terminal or PAD) in order to control another device.
  • Gesture control is a relatively novel control method: a camera on one device monitors a gesture, analyzes it, and converts it into control of the device; alternatively, the user wears devices such as a ring, watch or vest on the hand, arm or body, and these wearable devices identify the user's actions to control the device.
  • Related technologies and some products enable users to manipulate devices using gestures. For example, a camera is added to the television to collect and recognize the user's gestures, which are then mapped to manipulation commands according to a predefined correspondence, so that the television can be manipulated through gestures; currently implemented manipulations include changing the channel, changing the volume, and so on.
  • the gesture recognition devices currently on the market and in the laboratory are each equipped with a camera, and it is often necessary to input a "boot indication" to the gesture recognition device, such as a specific gesture, or other control commands.
  • a "boot indication” to the gesture recognition device, such as a specific gesture, or other control commands.
  • the related technology Universal Plug and Play (UPnP) technology specifies how to send and receive network messages between devices to implement discovery and control.
  • The technology uses network addresses and digital codes as device identifiers, which are a kind of machine identification.
  • the final control requires the user to select and operate according to the machine identification of the device.
  • There is also a gesture recognition device capable of recognizing a device, including its position and communication identification parameters, and issuing recognized gesture information; when gesture manipulation is used, the controlled device issues the gestures it can process: gesture numbers, gesture text description information, and digital graphic information.
  • Embodiments of the present invention provide a method and apparatus for distributing and transmitting identification information and an information recognition system capable of sharing identification information.
  • the embodiment of the invention provides a method for issuing identification information, including:
  • the first device issues identification information; the identification information includes at least: coordinates of the identified object in the first coordinate system, and first coordinate system information.
  • the method further includes:
  • the first device captures an image, and the identified object is identified according to the captured image to obtain the identification information.
  • the coordinates of the identified object in the first coordinate system are coordinates of one or more points on the identified object in the first coordinate system.
  • the identification information further includes one or more of the following information:
  • the distance between the identified object and the first device, the network address or communication port of the identified object, the identified object identifier, and the first device identifier.
  • the second device is the identified object;
  • the coordinates of the second device in the first coordinate system are the coordinates of the identified object in the first coordinate system.
  • the first device issues the identification information in a manner of broadcasting, or multicasting, or unicasting, or responding to a query request.
  • the embodiment of the invention further provides a method for transmitting identification information, including:
  • the second device receives the identification information; the identification information includes at least: coordinates of the identified object in the first coordinate system, and first coordinate system information.
  • the method further includes:
  • the second device calculates coordinates of the identified object in the second coordinate system according to the identification information.
  • the calculating, by the second device, the coordinates of the identified object in the second coordinate system according to the identification information includes:
  • the second device calculates a conversion matrix according to the coordinates of the second device in the first coordinate system and the coordinates of the second device in the second coordinate system, and calculates the coordinates of the identified object in the second coordinate system according to the conversion matrix and the identification information.
  • the identification information further includes one or more of the following information: the distance between the identified object and the first device, the network address or communication port of the identified object, the identified object identifier, and the first device identifier.
  • the method further includes:
  • the second device communicates with the identified object according to the calculated or received coordinates, network address or communication port of the identified object in the second coordinate system.
  • An embodiment of the present invention further provides an apparatus for issuing identification information, including at least:
  • the first network unit is configured to issue identification information; the identification information includes at least: coordinates of the identified object in the first coordinate system, and first coordinate system information.
  • the device further includes:
  • the first visual recognition unit is configured to capture an image, and identify the identified object according to the captured image to obtain the identification information.
  • the first network unit is further configured to:
  • the identification information is published in a broadcast, or multicast, or unicast, or in response to a query request.
  • An embodiment of the present invention further provides an apparatus for transmitting identification information, including at least:
  • the second network unit is configured to receive the identification information; the identification information includes at least: coordinates of the identified object in the first coordinate system, and the first coordinate system information.
  • the device further includes:
  • the second visual recognition unit is configured to calculate coordinates of the identified object in the second coordinate system based on the identification information.
  • the second visual recognition unit is configured to: calculate a conversion matrix according to the coordinates of the second device in the first coordinate system and the coordinates of the second device in the second coordinate system, and calculate the coordinates of the identified object in the second coordinate system according to the conversion matrix and the identification information.
  • the device further includes:
  • a storage module configured to save one or more of the following: coordinates of one or more points on the identified object in a first coordinate system, network address or communication port of the identified object, Identify the object identifier, the first device identifier.
  • An embodiment of the present invention further provides an information recognition system, including at least:
  • a first device configured to issue identification information;
  • the identification information includes at least: coordinates of the identified object in the first coordinate system and first coordinate system information;
  • the second device is configured to receive the identification information.
  • the first device is further configured to:
  • the image is captured, and the identified object is identified based on the captured image to obtain the identification information.
  • the first device is further configured to: publish the identification information in a manner of broadcasting, or multicasting, or unicasting, or responding to a query request.
  • the second device is further configured to: calculate the coordinates of the identified object in the second coordinate system according to the identification information.
  • the second device is configured to: calculate a conversion matrix according to the coordinates of the second device in the first coordinate system and the coordinates of the second device in the second coordinate system, and calculate the coordinates of the identified object in the second coordinate system according to the conversion matrix and the identification information.
  • the second device is further configured to: save one or more of the following: coordinates of one or more points on the identified object in the first coordinate system, the network address or communication port of the identified object, the identified object identifier, and the first device identifier.
  • the embodiment of the present invention includes: the first device issues identification information; the identification information includes at least: coordinates of the identified object in the first coordinate system, and first coordinate system information.
  • the first device issues the identification information of the identified object and thereby shares it, so that another device can identify the identified object by receiving the issued identification information, regardless of whether the identified object is within that device's own capture range.
  • FIG. 1 is a flowchart of a method for distributing identification information according to an embodiment of the present invention
  • FIG. 2 is a flowchart of a method for transmitting identification information according to an embodiment of the present invention
  • FIG. 3 is a schematic structural diagram of an apparatus for distributing identification information according to an embodiment of the present invention.
  • FIG. 4 is a schematic structural diagram of an apparatus for transmitting identification information according to an embodiment of the present invention.
  • FIG. 5 is a schematic structural diagram of an information recognition system according to an embodiment of the present invention.
  • FIG. 6 is a schematic diagram of a device scenario according to an embodiment of the present invention.
  • Figure 7 is a schematic diagram of a device unit according to an embodiment of the present invention.
  • FIG. 8 is a flowchart of a method according to an embodiment of the present invention.
  • FIG. 9 is a schematic diagram of message interaction according to an embodiment of the present invention.
  • In the related art, a gesture recognition device does not publish or share the location information of the other devices it has recognized or the gestures it recognizes; gesture instructions are processed only by the gesture recognition device itself, which causes confusion about command input.
  • an embodiment of the present invention provides a method for distributing identification information, including:
  • Step 100 The first device issues the obtained identification information.
  • the first device may issue the identification information in a manner of broadcasting, or multicasting, or unicasting, or responding to the query request.
  • the identification information includes at least coordinates of the identified object in the first coordinate system and first coordinate system information.
  • the first coordinate system information refers to coordinate description information of the first coordinate system, such as a Cartesian coordinate system or a polar coordinate system, a coordinate origin, a distance ratio (scale bar), a device located at or indicating the coordinate origin, and the like.
  • the identified object may be a second device, and the coordinates of the identified object in the first coordinate system may be coordinates of the second device in the first coordinate system.
  • the coordinates of the identified object in the first coordinate system may be the coordinates of one or more points (such as the center point) on the identified object in the first coordinate system; likewise, the coordinates of the second device in the first coordinate system may be the coordinates of one or more points (e.g., the center point) on the second device in the first coordinate system.
  • the coordinates of the identified object or the second device in the first coordinate system can be obtained by using a depth camera.
  • the identification information may further include one or more of the following:
  • the distance between the identified object and the first device, the network address or communication port of the identified object, the identified object identifier, and the first device identifier.
  • the coordinate origin of the first coordinate system may be the first device, or may be other points in the space.
  • the identified object identifier may be a visual recognition feature of the recognized object, and the visual recognition feature may be one or more of the following: a barcode label, a two-dimensional code label, a text label, a trademark logo, and the like.
  • the identified object may be one or more of a television, a player, a storage server, a computer, a stereo, a speaker, a projector, a set top box, a car, a machine tool, a ship, and the like.
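  • As an illustration only, the identification information described above can be gathered into a simple record before it is published. The field names in the sketch below are assumptions made for this illustration, not part of the embodiment.

      # Sketch of an identification-information record; field names are illustrative assumptions.
      from dataclasses import dataclass
      from typing import List, Optional, Tuple

      @dataclass
      class IdentificationInfo:
          # Content required by the embodiment:
          object_coords: List[Tuple[float, float, float]]   # coordinates of one or more points on the identified object, in the first coordinate system
          coord_system: dict                                 # first coordinate system information, e.g. {"type": "cartesian", "origin": "first device", "scale": 1.0}
          # Optional content listed by the embodiment:
          distance_to_first_device: Optional[float] = None   # distance between the identified object and the first device
          object_address: Optional[str] = None               # network address or communication port of the identified object
          object_id: Optional[str] = None                    # identified object identifier
          first_device_id: Optional[str] = None              # first device identifier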
  • the first device may send the identification information to the second device by means of broadcast or multicast, for example via UDP-based protocols such as UPnP (Universal Plug and Play), mDNS (Multicast Domain Name System), or DNS-SD (DNS-based Service Discovery).
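  • A minimal sketch of such a multicast publication is given below; the multicast group, port, and JSON encoding are assumptions made for the sketch, since the embodiment only requires that broadcast, multicast, unicast, or query-response be used.

      # Minimal multicast publication sketch (group, port, and JSON encoding are assumptions).
      import json
      import socket

      MCAST_GROUP = "239.255.10.10"   # illustrative multicast group
      MCAST_PORT = 50000              # illustrative port

      def publish_identification(info: dict) -> None:
          """Send the identification information as a single multicast datagram."""
          payload = json.dumps(info).encode("utf-8")
          sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
          sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)  # keep it on the local network
          sock.sendto(payload, (MCAST_GROUP, MCAST_PORT))
          sock.close()

      publish_identification({
          "object_id": "TV",
          "object_coords": [[1.2, 0.4, 3.0]],
          "coord_system": {"type": "cartesian", "origin": "first device"},
      })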
  • the network address or communication port of the identified object that the first device obtains may be sent by the identified object to the first device.
  • the first device issues the identification information of the identified object and thereby shares it, so that another device can identify the identified object by receiving the issued identification information, regardless of whether the identified object is within that device's own capture range.
  • the method also includes:
  • Step 101 The first device captures an image, and identifies the identified object according to the captured image to obtain the identification information.
  • the step includes: the first device identifying the identified object to obtain the identification information, or receiving the query request.
  • the first device identifying the identified object to obtain the identification information includes:
  • the first device collects the visual recognition feature of the identified object, and identifies the identified object according to the collected visual recognition feature to obtain the identification information.
  • the first device collects the visual recognition features of the identified object, including:
  • the first device may receive a broadcast or multicast message from the identified object; wherein the broadcast or multicast message includes a visually recognizable feature of the identified device; or
  • the first device sends a query broadcast or multicast message to the identified object and receives a response message from the identified object; wherein the response message includes a visually recognized feature of the identified device.
  • identifying the identified object according to the collected visual recognition feature to obtain the identification information includes:
  • the images or video segments within the visual range are acquired by the depth camera, and the collected visual recognition features are matched in the acquired image or video segment to obtain the coordinates of the identified object in the first coordinate system.
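  • The embodiment does not mandate a particular matching algorithm; as one hedged illustration, ORB feature matching from OpenCV could be used to decide whether a collected visual recognition feature (for example a trademark image) appears in the frame captured by the depth camera.

      # Illustrative feature matching with OpenCV ORB; the choice of algorithm is an assumption.
      import cv2

      def feature_present(frame_gray, feature_gray, min_matches=10):
          """Return True if the visual recognition feature appears in the captured frame."""
          orb = cv2.ORB_create()
          _, des_feature = orb.detectAndCompute(feature_gray, None)
          _, des_frame = orb.detectAndCompute(frame_gray, None)
          if des_feature is None or des_frame is None:
              return False
          matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
          matches = matcher.match(des_feature, des_frame)
          # A sufficient number of matches suggests the identified object is in view; the matched
          # positions, combined with the depth image, would then give its coordinates in the first coordinate system.
          return len(matches) >= min_matches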
  • the method further includes:
  • Step 102 The first device sends the identification information of all the identified objects to the second device.
  • the first device may send the identification information to the second device by means of broadcast, multicast, or unicast.
  • an embodiment of the present invention provides a method for receiving or transmitting identification information, including:
  • Step 200 The second device receives the identification information.
  • the identification information includes: coordinates of the identified object in the first coordinate system, and first coordinate system information.
  • the identification information may further include coordinates of the second device in the first coordinate system.
  • the foregoing first coordinate system information may also be a default configuration to the second device.
  • the coordinate origin of the first coordinate system may be the first device, or may be other points in the space.
  • the identification information may further include one or more of the following:
  • the coordinates of one or more points on the identified object (e.g., one or more tangent planes, or one or more points on the contour);
  • the network address or communication port of the identified object.
  • the method also includes:
  • Step 201 The second device sends a query request.
  • the method also includes:
  • Step 202 The second device calculates coordinates of the identified object in the second coordinate system according to the identification information.
  • the second device calculates a conversion matrix according to the coordinates of the second device in the first coordinate system and the coordinates of the second device in the second coordinate system, and then calculates the coordinates of the identified object in the second coordinate system according to the calculated conversion matrix and the identification information (such as the coordinates of the identified object in the first coordinate system contained therein).
  • the transformation matrix is the product of the coordinate vector of the second device in the second coordinate system and the inverse vector of the coordinate vector of the second device in the first coordinate system.
  • the coordinates of the identified object in the second coordinate system are the product of the coordinates of the identified object in the first coordinate system and the transformation matrix.
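  • The conversion can be illustrated with a small sketch under the simplifying assumption that the first and second coordinate systems share the same orientation and differ only by a translation; the general case additionally needs the relative rotation, which the embodiment does not spell out. Under this assumption the conversion matrix is a 4x4 homogeneous translation matrix built from the second device's coordinates in both systems.

      # Coordinate conversion sketch (translation-only assumption; names are illustrative).
      import numpy as np

      def conversion_matrix(second_in_first, second_in_second):
          """4x4 homogeneous matrix mapping first-coordinate-system points into the second system."""
          t = np.asarray(second_in_second, dtype=float) - np.asarray(second_in_first, dtype=float)
          m = np.eye(4)
          m[:3, 3] = t
          return m

      def convert(point_in_first, m):
          p = np.append(np.asarray(point_in_first, dtype=float), 1.0)  # homogeneous coordinates
          return (m @ p)[:3]

      # The second device is the origin of its own coordinate system and is seen at (1.0, 0.5, 2.0) by the first device:
      M = conversion_matrix(second_in_first=[1.0, 0.5, 2.0], second_in_second=[0.0, 0.0, 0.0])
      print(convert([3.0, 1.0, 4.0], M))  # -> the identified object at (2.0, 0.5, 2.0) in the second coordinate system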
  • Step 203 The second device saves the identified object identifier in the identification information and the calculated coordinates of the identified object in the second coordinate system.
  • the second device may also save one or more of the following:
  • the coordinates of one or more points on the identified object (e.g., one or more tangent planes, or one or more points on the contour);
  • the network address or communication port of the identified object.
  • the method also includes:
  • Step 204 The second device communicates with the identified object according to the calculated or received coordinates of the identified object in the second coordinate system, the network address of the identified object, or the communication port.
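  • With the coordinates, network address, and communication port of the identified object in hand, the second device can send it a recognized gesture result or a control command. The TCP transport and JSON payload below are assumptions made for the sketch; the embodiment does not fix the message format.

      # Illustrative command delivery to the identified object (TCP and JSON are assumptions).
      import json
      import socket

      def send_command(address: str, port: int, command: dict) -> None:
          with socket.create_connection((address, port), timeout=2.0) as sock:
              sock.sendall(json.dumps(command).encode("utf-8"))

      # Example usage (address, port, and payload are illustrative):
      # send_command("192.168.1.20", 6000, {"gesture": 1, "action": "power_on"})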
  • an embodiment of the present invention further provides an apparatus for issuing identification information, including at least:
  • the first network unit 301 is configured to issue identification information; the identification information includes at least: coordinates of the identified object in the first coordinate system, and first coordinate system information.
  • the first visual recognition unit 302 is configured to capture an image, identify the identified object based on the captured image, and obtain identification information.
  • the first network unit 301 is further configured to:
  • the identification information is published in a broadcast, or multicast, or unicast, or in response to a query request.
  • an embodiment of the present invention further provides an apparatus for receiving or transmitting identification information, including at least:
  • the second network unit 401 is configured to receive the identification information from the first device; the identification information includes at least: coordinates of the identified object in the first coordinate system, and first coordinate system information.
  • the second network unit 401 is further configured to: send a query request.
  • the second visual recognition unit 402 is configured to calculate coordinates of the identified object in the second coordinate system based on the identification information.
  • the second visual recognition unit 402 is configured to: calculate a conversion matrix according to the coordinates of the second device in the first coordinate system and the coordinates of the second device in the second coordinate system, and calculate the coordinates of the identified object in the second coordinate system according to the conversion matrix and the identification information.
  • the storage module 403 is configured to save one or more of the following information: coordinates of one or more points on the identified object in the first coordinate system, the network address or communication port of the identified object, the identified object identifier, and the first device identifier.
  • an embodiment of the present invention further provides an information recognition system, including at least:
  • the first device 501 is configured to issue identification information; the identification information includes at least: coordinates of the identified object in the first coordinate system and the first coordinate system description information;
  • the second device 502 is configured to receive the identification information.
  • the first device 501 is further configured to:
  • the image is captured, and the identified object is identified based on the captured image to obtain identification information.
  • the first device 501 is further configured to: issue the identification information in a manner of broadcasting, or multicasting, or unicasting, or responding to the query request.
  • the second device 502 is further configured to:
  • the coordinates of the identified object in the second coordinate system are calculated based on the identification information.
  • the second device 502 is further configured to: calculate a conversion matrix according to the coordinates of the second device in the first coordinate system and the coordinates of the second device in the second coordinate system, and calculate the coordinates of the identified object in the second coordinate system according to the conversion matrix and the identification information.
  • the second device 502 is further configured to: save one or more of the following information: coordinates of one or more points on the identified object in the first coordinate system, the network address or communication port of the identified object, the identified object identifier, and the first device identifier.
  • various components, systems, apparatuses, processing steps and/or data structures can be manufactured, manipulated, and/or executed using various types of operating systems, computing platforms, computer programs, and/or general purpose machines. Moreover, those of ordinary skill in the art will appreciate that less versatile devices may also be utilized.
  • the methods described are performed by a computer, device or machine, and the method can be stored as machine-readable instructions on a defined medium, such as a computer storage device, including but not limited to ROM (read-only memory), FLASH memory, transfer devices, etc., magnetic storage media (e.g., magnetic tape, magnetic disk drives, etc.), optical storage media (e.g., CD-ROM, DVD-ROM, paper cards, paper tape, etc.) and other well-known types of program memory.
  • FIG. 6 is a schematic diagram of a device scenario of an embodiment of the present invention, illustrating each device and its relationship.
  • The scenario includes gesture and device identification devices A and B, and a TV set (TV).
  • the gesture and device identification device A is the second device according to the embodiment of the present invention.
  • the gesture and device identification device B is also the first device according to the embodiment of the present invention.
  • the TV set (TV) is the identified object according to the embodiment of the present invention.
  • A gesture and device recognition device, also known as a (computer) visual recognition device, implements computer vision and image processing technology.
  • The gesture and device recognition devices are equipped with cameras and are capable of recognizing gestures and of recognizing and discovering devices; the TV supports gesture control but has no camera for recognizing gestures. This configuration is assumed only for convenience of description.
  • the gesture and device identification device A can be assembled with the TV to become a composite-function device.
  • the television TV is located within the visual range of the gesture and device identification device B, i.e., the gesture and device recognition device B is able to capture an image of the TV.
  • The TV set, however, is not within the visual range of the other gesture and device identification device A, which therefore cannot capture it.
  • the gesture and device recognition devices A and B are located within each other's visual range and can capture each other's images.
  • Each device contains a control module that can control sending and receiving messages with other devices, as well as processing control commands or handing over control commands, for example using protocols such as UPnP (Universal Plug and Play), mDNS (Multicast Domain Name System), or DNS-SD (Domain Name System-based Service Discovery).
  • The devices in the scenario may include media display devices such as a TV, and servers such as DVD players and home storage servers.
  • the gesture recognition control devices A and B include a camera with image and video capture capabilities, and an infrared ranging module. As an embodiment, the gesture recognition control device A or B internally includes an image recognition unit, a data storage unit, a control unit, and a network service unit.
  • all the gesture recognition control devices can identify the devices in the visual range thereof.
  • the implementation method is that the camera and the ranging module rotate and collect images in three-dimensional space, and the collected images are analyzed to search for predefined label graphics.
  • the predefined labels may be images and graphics pre-stored in the visual recognition and discovery control device, or may be images and graphics received from the discovered and recognized devices.
  • When the image recognition unit of the gesture recognition control device analyzes an image and recognizes that there are multiple identifiers in it, it identifies the identifiers separately and stores the identified information, then performs distance measurement on the devices where the labels are located, and stores the distance information together with the identification information.
  • the visual recognition and discovery control device receives feature images, such as device photo data taken from multiple angles, from devices such as a TV, a DVD player, and a home storage server, and then searches for the received device photo data in the captured panoramic image to measure and record the location of each device.
  • This embodiment may use techniques such as feature detection and matching in computer vision technology.
  • when the TV is powered on, a message is sent in a multicast manner; the message includes (a sketch of such an announcement follows this list):
  • the network address and unique identifier of the device (i.e., the TV);
  • the supported gesture numbers, such as 1 for five fingers open, 2 for two fingers, 10 for a fist, 20 for a shaking arm, etc.;
  • the controls corresponding to the gestures, for example gesture 1 corresponds to powering on, and so on.
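  • A sketch of such a power-on announcement is given below; the field names and the encoding are assumptions, and the gesture entries simply echo the examples above before the record is serialized and multicast.

      # Illustrative power-on announcement from the TV (field names and encoding are assumptions).
      announcement = {
          "address": "192.168.1.20",       # network address of the device (illustrative)
          "device_id": "TV-001",           # unique identifier (illustrative)
          "supported_gestures": {          # gesture number -> meaning, echoing the examples above
              1: "five fingers open",
              2: "two fingers",
              10: "fist",
              20: "shaking arm",
          },
          "gesture_commands": {1: "power on"},  # mapping from gesture number to control command
      }
      # The record would then be serialized (e.g. as JSON) and sent in a multicast message.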
  • the three-dimensional coordinates of the two gesture and device identification devices (A, B) and the television TV are expressed as follows.
  • In the coordinate system of device A: gesture and device identification device A is at A(0, 0, 0), and gesture and device identification device B is at B(b1, b2, b3).
  • In the coordinate system of device B: gesture and device identification device B is at B(0, 0, 0), gesture and device identification device A is at A(a1, a2, a3), and the TV is at T(t1, t2, t3).
  • the coordinate point here may be the center point of the device plane, or may be specified as some other point on the device, such as a point on a tangent plane or on the contour.
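  • Under the same translation-only assumption used in the sketch after step 202, these symbolic coordinates convert directly: device A is the origin of its own coordinate system and sits at A(a1, a2, a3) in device B's system, so the TV reported by B maps into A's system as T(t1 - a1, t2 - a2, t3 - a3).

      # Worked example under the translation-only assumption (values are illustrative).
      def tv_in_a(t, a):
          """Coordinates of the TV in device A's system, given T and A as measured by device B."""
          return tuple(ti - ai for ti, ai in zip(t, a))

      print(tv_in_a((3.0, 1.0, 4.0), (1.0, 0.5, 2.0)))  # -> (2.0, 0.5, 2.0)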
  • Figure 7 is a schematic diagram of a device unit in accordance with an embodiment of the present invention.
  • the visual recognition device A (i.e., the second device);
  • the visual recognition device B (i.e., the first device);
  • the receiving device T (i.e., the identified object).
  • the device T is composed of a network unit, a visual identifier, and a visual recognition response unit.
  • the function of the network unit includes a network interface, mutual discovery and connection capabilities on the network, and the ability to send and receive messages with other devices.
  • the visual identifier is used to visually represent the device, at least as a logo attached to the exterior, or as a photo, picture, trademark logo, or video clip representing the device itself. These photos, graphic images, or video clips can be sent by the device T as data in its messages to the device B.
  • the visual recognition response unit is an internal coordination control unit: it receives messages from the network unit for analysis, adds visual identification data and visual feature description data to the response message according to the request in the message, and instructs the network unit to respond to the request message; here the visual feature description data may be visual-identification-related signal characteristics, including a regular description.
  • the network message may adopt the UPnP protocol, and fill in the data in the specified field and the extended field according to the protocol.
  • the visual recognition devices A and B, that is, the visual gesture and device identification devices, are composed of a visual data acquisition unit, a storage unit, a network unit, and a visual recognition unit.
  • the function of the network unit includes a network interface, which supports mutual discovery and connection on the network, and can send and receive messages with other devices.
  • the visual data acquisition units of the visual recognition devices A and B and the visual identification unit of the device T are corresponding units: the visual identification unit of T provides visual features such as images or emits visual/optical signals, while the visual data acquisition units of devices A and B are responsible for capturing the visual features provided by the visual identification unit and matching them against predefined features to determine the recognition result.
  • the visual recognition unit is an internal computing unit that performs visually related calculations, including image recognition, spatial coordinate determination, and object space position coordinate calculations, and the like.
  • the storage unit stores the identified device information, including the device identification (such as that of the device T), network address, spatial location, visual feature data, etc., wherein the spatial location may be the geometric parameters of the device T relative to the origin of device B's spatial coordinate system, such as three-dimensional space coordinates or vector coordinates.
  • the storage unit may provide the location data of the device T for use by the device A.
  • the device B issues identification information including at least coordinate system information and coordinate parameters of the identified object such as the device T.
  • the coordinate parameter of the identified object may be its coordinates in the coordinate system described by the coordinate system information.
  • the device B identifies the identified object T by capturing an image, and determines a coordinate system; the origin of the coordinate system can be a point in the image.
  • the identification information about the device T may also include an object contour (such as three-point, four-point, five-point, or other multi-point coordinates), or distance information (such as the distance from the identified device to the center point coordinates or to the coordinate origin, i.e., to the coordinates of the device B itself).
  • Other information includes the identification object, that is, the identifier of the device T, or the network address, or the communication port.
  • the identified object is the device T, and the identification information includes its coordinates.
  • Device B issues identification information, which can be broadcast, multicast, unicast, and query-response.
  • Because of its location, the device A cannot capture the device T, but it can receive from the device B the identification information about the device T that the device B issues.
  • the received information includes the coordinate system information and the coordinate parameters of the identified object T, as well as the identifier, network address, or communication port of the identified object.
  • Although the gesture and device recognition apparatus A cannot capture the TV, it can determine the spatial relationship between a gesture and the TV by combining the known coordinates of the TV with the captured gesture position information.
  • the device A can then continue (computer vision) recognition and communicate with the identified object T according to the recognized gesture result, the coordinates of the identified object T, and its network address or communication port, i.e., send gesture recognition result information, control commands, and the like to the identified object T.
  • Figure 8 is a flow chart of a method in accordance with an embodiment of the present invention.
  • the flowcharts described in this embodiment refer to the devices A, B in the embodiment of Fig. 6, and A and B in the embodiment of Fig. 7.
  • This embodiment describes a flow chart of the operation of the apparatus.
  • the visual recognition apparatuses 1 and 2 can refer to A and B in Figs. 6 and 7.
  • Step 800 the visual recognition device 1 collects data, identifies the device and its location
  • the device 1 captures the image in the visual range through the camera, and identifies the device in the image, and finally obtains information such as the device position and network parameters.
  • For details of the collection, identification, and calculation of this information, refer to the previous embodiments.
  • Step 801 The visual recognition device issues a data update notification.
  • the device 1 issues an update notification about the identified device by means of broadcast, or multicast, or unicast, or in response to a query request.
  • Step 802 the visual recognition device 2 issues a data query notification
  • In this step, the device 2 actively issues a query request. This step has no dependency on step 801; only one of the two needs to be used.
  • Step 803 The visual recognition device 1 responds to the visual recognition device 2, and the response message includes the identified device and its location information.
  • the visual recognition device 1 transmits the related information of the identified device to the visual recognition device 2, including: the coordinate system, the coordinate parameters of the identified object (such as the device T), the coordinates of the device 2 in that coordinate system, the object contour of the recognized object (such as three-point, four-point, five-point, or other multi-point coordinates), or distance information (such as the distance from the identified device to the center point coordinates or to the coordinate origin); other information includes the identifier of the identified object (i.e., the device T), or its network address, or communication port.
  • Step 804 the visual recognition device 2 interacts with the identified device
  • Although the identified device T is not within the visual range of the visual recognition device 2, with the information provided by the visual recognition device 1, when the user makes a gesture within the visual range of the visual recognition device 2, the visual recognition device 2 can determine the device that the user's gesture is intended to manipulate and can send the recognized gesture information or command to the identified device T. In this case the user's gesture need not be within the visual range of the visual recognition device 1 and the user's operation is not affected. It is even possible to install the visual recognition device 1 exclusively for recognizing target devices, while the visual recognition device 2 is used for recognizing gestures and controlling a target device.
  • FIG. 9 is a schematic diagram of message interaction according to an embodiment of the present invention.
  • the visual recognition device 2 receives a message related to the target recognized device issued by the visual recognition device 1.
  • the processing flow for this function is:
  • Step 900 the visual recognition device 1 identifies the target device
  • the identification of the target device by the visual recognition device includes capturing an image or video segment of the identified device with a depth camera or a plurality of cameras and recognizing from it the spatial position information of the identified device, such as coordinate data and the distance between the two devices; it also includes obtaining, through network messages, information such as the network address of the identified device, its communication port, its (unique) identifier, and its service capability.
  • the visual recognition device 1 can also perform the following behavior (i.e., the ACTION step shown in Fig. 9): saving the identified information, associating the identification information with each device, and the like.
  • Step 901 The visual recognition device 1 issues device data of the identified device.
  • the implementation of this message is illustrated in the embodiment illustrated in Figures 1, 2, 6, and 8.
  • the distribution here also uses network messages, in broadcast, multicast, or unicast form, where unicast can follow a subscription-publication pattern: for example, the visual recognition device 1 accepts in advance a subscription or update request for device data from the visual recognition device 2, and then sends it messages reporting the device data of the identified devices.
  • Step 902 The visual recognition device 2 sends a query device data request to the visual identification device 1;
  • the visual recognition device 2 may also use a broadcast, or multicast, or unicast method to send a query request.
  • the figure illustrates the scenario of a peer-to-peer unicast query.
  • Upon a query, the identification function may re-identify and recalculate the identified devices, or it may send the already recognized device information that has been saved.
  • Step 903 the visual recognition device 1 transmits the device data to the visual recognition device 2 as a response to the message of step 902;
  • This step is also performed through a network message, which may carry any of the information indicated in step 900, a combination of that information, or all of it.
  • Step 904 The visual recognition device 2 sends a device control command to the identified device.
  • the visual recognition device 2 can determine the device that the user gesture wants to manipulate, and then transmit the recognized gesture information or command to the identified device T.
  • the visual recognition device 2 can determine the device that the user's gesture is intended to control not only because it collects, analyzes, and recognizes the user's gesture, but also because it has received the spatial position information of the identified devices; it can therefore determine the positional relationship between the user's gesture and the identified devices and thus the user's control intention (one possible selection rule is sketched below).
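  • The embodiment does not fix how that positional relationship is evaluated. One straightforward possibility, sketched below as an assumption, is to pick the identified device whose stored position lies closest to the direction in which the recognized gesture points, with everything expressed in the coordinate system of the visual recognition device 2.

      # Illustrative target selection: choose the identified device closest to the pointing direction.
      # The angle-based rule is an assumption; the embodiment only requires that the positional
      # relationship between the gesture and the identified devices be used.
      import numpy as np

      def pick_target(hand_pos, pointing_dir, devices):
          """devices: {device_id: (x, y, z)} in the coordinate system of visual recognition device 2."""
          d = np.asarray(pointing_dir, dtype=float)
          d /= np.linalg.norm(d)
          best_id, best_cos = None, -1.0
          for dev_id, pos in devices.items():
              v = np.asarray(pos, dtype=float) - np.asarray(hand_pos, dtype=float)
              cos = float(np.dot(v, d) / np.linalg.norm(v))
              if cos > best_cos:
                  best_id, best_cos = dev_id, cos
          return best_id

      print(pick_target((0, 0, 0), (0, 0, 1), {"TV": (0.2, 0.1, 3.0), "player": (2.0, 0.0, 1.0)}))  # -> TV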
  • Step 905 The identified device sends a device execution result response to the visual recognition device 2.
  • This step is an optional step, and when the target device completes the instruction of the device 2, a response message can be sent.
  • the response message may include a successful control, a failed control, an unrecognized manipulation command, an unsupported manipulation mode, a manipulation result data, and the like.
  • steps 900-903 and 904-905 can be run separately as two independent procedures. For example, after a device is recognized, if the user does not perform gesture control, or the visual recognition device does not collect a gesture image or video clip, this is equivalent to only steps 900-903 occurring and steps 904-905 not occurring.
  • In the above embodiments the television, the player, and the storage server are used as controlled devices, but the embodiment of the present invention does not limit the devices to be controlled: the devices mentioned in the embodiments, as well as computers, stereos, speakers, projectors, set-top boxes, etc., can be used as controlled devices, and even other industrial equipment such as automobiles, machine tools, and ships can be manipulated by the visual recognition and discovery control devices.
  • the camera of the visual recognition and discovery control device may be of various specifications, for example fixed focal length or zoom, and its rotation range may cover multiple angles up, down, left, and right, or only left and right angles; it is only necessary that the configured camera have the capabilities described in the embodiments.
  • the distance measuring unit can use laser or infrared ranging, or light in other bands; three cameras can be used to calculate the distance, and more cameras can be used to calculate the distance with weighted adjustments.
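  • The embodiment leaves the ranging method open. For camera-based ranging, one common possibility (an assumption here, not a requirement of the embodiment) is stereo triangulation between two rectified views, where depth equals the focal length times the baseline divided by the disparity.

      # Illustrative two-camera ranging by stereo triangulation (an assumption, not required by the embodiment).
      def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
          """Depth Z = f * B / d for a rectified stereo pair."""
          if disparity_px <= 0:
              raise ValueError("disparity must be positive")
          return focal_px * baseline_m / disparity_px

      print(stereo_depth(focal_px=700.0, baseline_m=0.1, disparity_px=20.0))  # -> 3.5 (metres)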
  • the foregoing process is only an embodiment of the present invention, and is not limited to the embodiment, and the method for executing the process is not limited.
  • the embodiment of the present invention may also be implemented in similar manners, for example with different unit names and message types; only the naming form, the message content, and the like differ.
  • each device's operating system may be a UNIX-like, WINDOWS-like, or ANDROID-like operating system, or an IOS operating system, and the user interfaces may use JAVA-language interfaces.
  • the first device issues the identification information of the identified object and thereby shares it, so that another device can identify the identified object by receiving the issued identification information, regardless of whether the identified object is within that device's own capture range.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Information Transfer Between Computers (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention relates to a method and apparatus for issuing and receiving identification information, and an information recognition system. The method comprises the following steps: a first device issues identification information, where the identification information comprises at least: coordinates of an object to be identified in a first coordinate system, and information about the first coordinate system.
PCT/CN2015/093896 2014-11-07 2015-11-05 Method and apparatus for issuing and transmitting identification information, and information recognition system WO2016070827A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201410623925.1 2014-11-07
CN201410623925.1A CN105630142A (zh) 2014-11-07 2014-11-07 一种发布和传递识别信息的方法和装置及信息识别系统

Publications (1)

Publication Number Publication Date
WO2016070827A1 true WO2016070827A1 (fr) 2016-05-12

Family

ID=55908606

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/093896 WO2016070827A1 (fr) 2014-11-07 2015-11-05 Method and apparatus for issuing and transmitting identification information, and information recognition system

Country Status (2)

Country Link
CN (1) CN105630142A (fr)
WO (1) WO2016070827A1 (fr)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101282411A (zh) * 2008-03-14 2008-10-08 青岛海信电器股份有限公司 控制装置、包括所述控制装置的视频装置及控制方法
CN101344816B (zh) * 2008-08-15 2010-08-11 华南理工大学 基于视线跟踪和手势识别的人机交互方法及装置
CN102012778A (zh) * 2009-09-04 2011-04-13 索尼公司 显示控制设备、显示控制方法以及显示控制程序
CN103869967A (zh) * 2012-12-14 2014-06-18 歌乐株式会社 控制装置、车辆以及便携终端

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102452611B (zh) * 2010-10-21 2014-01-15 上海振华重工(集团)股份有限公司 集装箱起重机的吊具空间姿态的检测方法和装置
CN102984592B (zh) * 2012-12-05 2018-10-19 中兴通讯股份有限公司 一种数字媒体内容播放转移的方法、装置和系统
CN103072528A (zh) * 2013-01-30 2013-05-01 深圳市汉华安道科技有限责任公司 一种车辆及其全景泊车方法、系统
CN104102335B (zh) * 2013-04-15 2018-10-02 中兴通讯股份有限公司 一种手势控制方法、装置和系统
CN105589550A (zh) * 2014-10-21 2016-05-18 中兴通讯股份有限公司 信息发布方法、信息接收方法、装置及信息共享系统


Also Published As

Publication number Publication date
CN105630142A (zh) 2016-06-01

Similar Documents

Publication Publication Date Title
US10013067B2 (en) Gesture control method, apparatus and system
JP6374052B2 (ja) デジタルメディアコンテンツ再生を転送する方法、装置およびシステム
WO2015127786A1 (fr) Procédé, dispositif et système de reconnaissance d'un geste de la main et support de stockage informatique
US10591999B2 (en) Hand gesture recognition method, device, system, and computer storage medium
TWI743853B (zh) 設備控制方法、電子設備及儲存介質
WO2015137740A1 (fr) Système de réseau domestique utilisant un robot et son procédé de commande
US11743590B2 (en) Communication terminal, image communication system, and method for displaying image
WO2016062191A1 (fr) Procédé de publication d'informations, procédé et appareil de réception d'informations, et système de partage d'informations
EP2939102B1 (fr) Procédé de rendu de données dans un réseau et dispositif mobile associé
KR101708578B1 (ko) 이전되어 재생되는 디지털 미디어 콘텐츠를 복귀시키는 방법, 장치 및 시스템
JP6556703B2 (ja) 視覚識別を実現する方法、装置及びシステム
WO2016070827A1 (fr) Procédé et appareil permettant d'émettre et de transmettre des informations de reconnaissance, et système de reconnaissance d'informations
CN116208735A (zh) 多摄像头控制方法、装置、系统、存储介质以及电子设备
CN115243085A (zh) 显示设备及设备互联方法
CN114760354A (zh) 一种音视频分享方法及系统

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15856499

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15856499

Country of ref document: EP

Kind code of ref document: A1