WO2021046747A1 - Active object recognition method, object recognition device, and object recognition system - Google Patents

Active object recognition method, object recognition device, and object recognition system

Info

Publication number
WO2021046747A1
Authority
WO
WIPO (PCT)
Prior art keywords
detected
information
virtual image
image
touch screen
Prior art date
Application number
PCT/CN2019/105349
Other languages
English (en)
French (fr)
Inventor
黄彦钊
陈永新
曾宏
Original Assignee
深圳盈天下视觉科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳盈天下视觉科技有限公司
Priority to CN201980002036.4A (patent CN110799987B)
Priority to PCT/CN2019/105349
Publication of WO2021046747A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; scene-specific elements
    • G06V20/20 Scenes; scene-specific elements in augmented reality scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041 Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means

Definitions

  • This application relates to the technical field of object recognition, and in particular to a touch screen-based active object recognition method, a touch screen-based active object recognition device, and an active object recognition system.
  • Touch screen object recognition has been widely used in entertainment venues, exhibition halls, and product displays. Because object recognition can be applied to different objects, it brings users a more realistic experience: when a user places
  • the identification device on the touch screen, the object information the user needs can be obtained directly through the display device and manipulated.
  • This not only reduces the cost of using and displaying the object, but also enhances the technological experience around the object and creates a realistic use environment. However, the related technology only performs passive recognition of objects:
  • the number of recognizable objects is limited, scalability is low, and object recognition errors are large, which easily causes large recognition control errors and degrades human-computer interaction performance.
  • One of the objectives of the embodiments of this application is to provide a touch screen-based active object recognition method, a touch screen-based active object recognition device, and an active object recognition system, aiming to solve the problems of passive object recognition in the related technology:
  • large object recognition errors and low compatibility and scalability.
  • An active object recognition method based on a touch screen, including:
  • controlling the virtual image information of the target detection object according to the image processing signal, where
  • the target detection object includes the object to be detected corresponding to the successfully matched identification code.
  • An active object recognition device based on a touch screen, including:
  • an identification module configured to output at least one identification code corresponding one-to-one to at least one object to be detected when a trigger event of the at least one object to be detected is detected;
  • an image detection module configured to acquire position information and angle information of the object to be detected on the touch screen, to generate virtual image information corresponding to the object to be detected;
  • an image recognition module configured to receive an image processing signal and generate an image identification code matching the image processing signal when an image processing event is detected;
  • an image matching module configured to sequentially match the image identification code with at least one of the identification codes; and
  • an image control module configured to control the virtual image information of the target detection object according to the image processing signal when the image identification code is successfully matched with one of the identification codes,
  • where the target detection object includes the object to be detected corresponding to the successfully matched identification code.
  • An active object recognition system, including:
  • an active object recognition device configured to control the virtual image information of the object to be detected according to the image processing signal when an image processing event and a trigger event of at least one object to be detected are detected.
  • FIG. 1 is a flowchart of a specific implementation of the touch screen-based active object recognition method provided by an embodiment of the present application;
  • FIG. 2 is a flowchart of a specific implementation of step S105 of the method in FIG. 1 according to an embodiment of the present application;
  • FIG. 3 is a flowchart of another specific implementation of the touch screen-based active object recognition method provided by an embodiment of the present application;
  • FIG. 4 is a flowchart of another specific implementation of the touch screen-based active object recognition method provided by an embodiment of the present application;
  • FIG. 5 is a flowchart of a specific implementation of step S403 of the method in FIG. 4 according to an embodiment of the present application;
  • FIG. 6 is a schematic diagram of the relative positional relationship between a closed annular area and virtual image information provided by an embodiment of the present application;
  • FIG. 7 is a flowchart of another specific implementation of the touch screen-based active object recognition method provided by an embodiment of the present application;
  • FIG. 8 is a flowchart of a specific implementation of step S102 of the method in FIG. 1 according to an embodiment of the present application;
  • FIG. 9 is a flowchart of another specific implementation of the touch screen-based active object recognition method provided by an embodiment of the present application;
  • FIG. 10 is a flowchart of another specific implementation of the touch screen-based active object recognition method provided by an embodiment of the present application;
  • FIG. 11 is a flowchart of another specific implementation of step S102 of the method in FIG. 1 according to an embodiment of the present application;
  • FIG. 12 is a flowchart of another specific implementation of step S102 of the method in FIG. 1 according to an embodiment of the present application;
  • FIG. 13 is a flowchart of another specific implementation of step S105 of the method in FIG. 1 according to an embodiment of the present application;
  • FIG. 14 is a schematic structural diagram of a touch screen-based active object recognition device provided by an embodiment of the present application;
  • FIG. 15 is a schematic structural diagram of an active object recognition system provided by an embodiment of the present application.
  • Figure 1 shows the specific implementation process of the touch screen-based active object recognition method provided by this embodiment, in which the active object recognition method can identify and detect at least one object to be detected. It should be noted that in this document, "objects to be detected" include any type of object, such as mobile phones, toys, cosmetics, jewelry, drinks, etc.
  • the above-mentioned active object recognition method includes the following steps:
  • The touch screen has a trigger detection function:
  • the recognition process of the object to be detected starts only when a trigger event is detected, which improves the accuracy and efficiency of object recognition control.
  • The identification code is used as the identifier of the object to be detected, and multiple objects to be detected are set in one-to-one correspondence with multiple identification codes, so that each object can be accurately identified by its identification code. This supports
  • the precise identification and control of multiple objects and broadens the scope of application of the active object recognition method.
  • S102 Acquire position information and angle information of the object to be detected on the touch screen to generate virtual image information corresponding to the object to be detected.
  • The actual position and state of the object to be detected on the touch screen can be sensed, so that the object can be controlled more accurately and quickly. With virtual image information generated from the actual state information of the object to be detected,
  • virtual operations on the object can be performed more accurately in the virtual environment to simulate the operation control steps of the real environment, giving a better human-computer interaction experience.
  • The image processing signal contains image processing information, through which image pixels can be operated on accurately and synchronously.
  • The image processing signal represents the user's image operation requirements. The image processing signal and the image identification code have a one-to-one correspondence, and the image identification code can identify a specific object to be detected from among at least one object to be detected, to realize a precise and flexible identification control function for the object to be detected.
  • S104 Match the image identification code with at least one identification code in sequence.
  • The image identification code and the identification code have a data correspondence; comparing them determines whether the object matching the identification code is the object to be controlled, achieving an accurate
  • recognition function that avoids recognition and control errors.
  • the target detection object includes the object to be detected corresponding to the successfully matched identification code.
  • the three objects to be detected are: object A to be detected, object B to be detected, and object C to be detected.
  • If the image identification code does not match the identification code of object A, matching continues with object B; when the image identification code matches the identification code of object B successfully, object B is the target detection object.
  • The virtual image information of object B is then controlled according to the image processing signal to meet the virtual operation control requirements. In the virtual simulation environment, the control experience of the real environment can be fully simulated, giving better human-computer interaction performance.
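The sequential matching of steps S104/S105 described above can be sketched as follows. This is a minimal illustration, not the application's implementation; the object records and the code format are assumptions.

```python
# Hypothetical sketch of S104/S105: sequentially match an image identification
# code against the identification codes of the objects to be detected.

def find_target_object(image_id_code, objects):
    """Return the first object whose identification code matches the image
    identification code, or None if no code matches."""
    for obj in objects:
        if obj["id_code"] == image_id_code:
            return obj  # successfully matched: this is the target detection object
    return None

# Three objects to be detected, as in the example above.
objects = [
    {"name": "A", "id_code": "ID-001"},
    {"name": "B", "id_code": "ID-002"},
    {"name": "C", "id_code": "ID-003"},
]

# Matching proceeds in order: A fails, B succeeds, C is never checked.
target = find_target_object("ID-002", objects)
```

Only the matched object's virtual image information would then be controlled; an unmatched code leaves every object untouched.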
  • Figure 1 shows the specific implementation process of the active object recognition method:
  • each object to be detected is separately coded to obtain a corresponding identification code, and
  • the virtual image information is then controlled through the image processing signal.
  • Fig. 2 shows step S105 of Fig. 1 provided by the present embodiment, in which controlling the virtual image information of the target detection object according to the image processing signal specifically includes:
  • S1051 Analyze the image processing signal to obtain movement control instructions, audio control instructions, and text control instructions.
  • The image processing signal includes a full range of control information. After the image processing signal is parsed, the control information can be read more accurately to achieve fast and precise control of the object and meet the user's multi-directional control needs, so the active object recognition method has higher practical value.
  • S1052 Control the virtual image information of the target detection object to move in a preset direction according to the movement control instruction; or play the preset audio content according to the audio control instruction; or display the preset text content according to the text control instruction.
  • the virtual image information of the target detection object is controlled to move according to a preset direction and a preset speed.
  • The virtual image information can be driven to move adaptively through the movement control instruction, changing its actual position to meet the user's viewing needs. An audio drive function can be realized through the audio control instruction: once the virtual image information of the target detection object is obtained, sound is produced according to the image processing signal; for example, the audio content may describe the shape of the virtual image information.
  • The text control instruction realizes a text display function, so that the user can view the data of the relevant target detection object in real time.
  • In short, the image processing signal can move the position of the virtual image information, or emit sound and text information related to it, satisfying the user's visual, auditory, and other sensory experiences, with a wide range of applications.
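The parse-and-dispatch behavior of S1051/S1052 can be sketched as below. The signal layout (a dict with optional `move`, `audio`, and `text` fields) is an illustrative assumption, not the format used by the application.

```python
# Illustrative sketch of S1051/S1052: parse an image processing signal into
# movement, audio, and text control instructions and apply each one to the
# virtual image state.

def handle_signal(signal, image):
    if "move" in signal:                      # movement control instruction
        dx, dy = signal["move"]["direction"]
        speed = signal["move"].get("speed", 1.0)   # preset speed
        image["x"] += dx * speed
        image["y"] += dy * speed
    if "audio" in signal:                     # audio control instruction
        image["playing"] = signal["audio"]["clip"]
    if "text" in signal:                      # text control instruction
        image["caption"] = signal["text"]["content"]
    return image

virtual_image = {"x": 0.0, "y": 0.0, "playing": None, "caption": ""}
handle_signal({"move": {"direction": (1, 0), "speed": 2.0},
               "text": {"content": "Object B"}}, virtual_image)
```

A single signal can thus carry any combination of the three instruction types, matching the "move, or play audio, or display text" alternatives in S1052.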
  • FIG. 3 shows another specific implementation step of the active object recognition method provided in this embodiment.
  • S301 to S306 in FIG. 3 correspond to S101 to S104 in FIG. 1 and S1051 to S1052 in FIG. 2, so the specific implementation of S301 to S306 will not be repeated here; the following focuses on S307. After the target detection object is controlled according to the movement control instruction,
  • the active object recognition method in this embodiment further includes:
  • S307 Acquire and display the coordinates of the virtual image information of the target detection object in the first preset coordinate system.
  • The coordinate change of the virtual image information of the target detection object within a preset time period is recorded, and the trajectory of the virtual image information over that
  • period is drawn, so that the position history of the virtual image information of the target detection object can be monitored in real time through an electronic map.
  • The first preset coordinate system is used as the position reference, and the actual position of the virtual image information of the target detection object in the first preset coordinate system is obtained.
  • Real-time tracking and control of the moving position of the virtual image information greatly improves the safety and efficiency of identifying and controlling the virtual image information of the target detection object, and the position of the virtual image information is displayed.
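The trajectory recording of S307 can be sketched as a simple coordinate log over the first preset coordinate system. The sampling interface and data layout here are hypothetical.

```python
# Sketch of S307: record the coordinates of the virtual image information over
# time and extract its trajectory within a preset time period.

class TrajectoryRecorder:
    def __init__(self):
        self.points = []          # (t, x, y) samples in the preset coordinate system

    def record(self, t, x, y):
        self.points.append((t, x, y))

    def trajectory(self, t_start, t_end):
        """Return the samples falling inside the preset time period."""
        return [(t, x, y) for (t, x, y) in self.points if t_start <= t <= t_end]

rec = TrajectoryRecorder()
for t, x, y in [(0, 0, 0), (1, 2, 1), (2, 4, 2), (5, 9, 5)]:
    rec.record(t, x, y)
path = rec.trajectory(0, 2)   # trajectory within the first two time units
```

The returned point list is what an "electronic map" display would draw to show the position history.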
  • A plurality of identification codes are generated according to the time sequence in which the multiple objects to be detected trigger the touch screen;
  • the objects and the identification codes correspond one-to-one.
  • This embodiment can detect the trigger state of the object to be detected on the touch screen in real time, and the identification code of each object is tied to its trigger time, which guarantees
  • the accuracy and efficiency of identity coding and realizes precise identification and control of the objects to be detected.
  • the identity coding of the object to be detected in this embodiment has nothing to do with the area of the touch screen.
  • the trigger event of the object to be detected can be received through the touch screen.
  • The active object recognition method in this embodiment can identify and control multiple objects to be detected, which has high practical value.
  • FIG. 4 shows another implementation process of the active object recognition method provided by this embodiment, in which S401 to S402 and S404 to S406 in FIG. 4 correspond to S101 to S105 in FIG. 1. Therefore, for the specific implementations of S401 to S402 and S404 to S406 in FIG. 4, please refer to the embodiment of FIG. 1, which will not be repeated here. The following will focus on S403.
  • The active object recognition method also includes:
  • S403 Generate a light source prompt signal matching the virtual image information.
  • This embodiment can display corresponding state prompt information through the light source prompt signal to highlight the human-computer interaction presented by the virtual image information, so that the user can perform the corresponding virtual image control function, improving the humanized control of the active object recognition method.
  • FIG. 5 shows the specific implementation process of S403 in FIG. 4 provided by this embodiment. Please refer to FIG. 5.
  • S403 specifically includes:
  • S4031 Acquire a closed ring area corresponding to the virtual image information, where the virtual image information is located inside the closed ring area.
  • The closed annular area surrounds the virtual image information, determining its position boundary for manipulation, which improves the control accuracy and efficiency of the virtual image information.
  • S4032 Emit a light source prompt signal around the outside of the closed annular area.
  • In this embodiment, a light source prompt signal with preset light intensity and preset brightness is emitted around the outside of the closed annular area.
  • FIG. 6 shows a schematic diagram of the relative positional relationship between the closed annular area 602 and the virtual image information 601 provided in this embodiment. Since the virtual image information 601 is located inside the closed annular area 602, the corresponding light source prompt information is emitted around the virtual image information 601 to meet the user's visual needs.
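The geometry of S4031/S4032 can be sketched with a circular approximation: the inner radius of the annulus encloses the virtual image, and the light source prompt is shown in the band just outside it. The circular shape and the band width are illustrative assumptions; the application does not specify them.

```python
# Geometric sketch of the closed annular area: build a ring around the virtual
# image (so the image lies inside the ring) and test whether a screen point
# falls in the band where the light source prompt is emitted.

import math

def annulus_for_image(cx, cy, image_radius, band_width):
    """Inner radius just encloses the virtual image; the prompt band
    extends band_width beyond it."""
    return (cx, cy, image_radius, image_radius + band_width)

def in_prompt_band(px, py, annulus):
    cx, cy, r_inner, r_outer = annulus
    d = math.hypot(px - cx, py - cy)
    return r_inner <= d <= r_outer

# A virtual image of radius 10 centered at the origin, with a 2-unit light band.
ring = annulus_for_image(0.0, 0.0, image_radius=10.0, band_width=2.0)
```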
  • FIG. 7 shows another implementation process of the active object recognition method provided by this embodiment.
  • S701 to S702 and S705 to S707 in FIG. 7 correspond to S101 to S105 in FIG. 1, so
  • the specific implementations of S701 to S702 and S705 to S707 can refer to the embodiment of FIG. 1 and will not be repeated here; the following focuses on S703 and S704. After S702, the active object recognition method also includes:
  • S703 Generate at least two key selection items associated with the object to be detected.
  • Each of the key options includes a specific key control function.
  • The key selection items can more comprehensively explain the functions and characteristics of the virtual image information for the user, bringing a better user experience; the active object recognition method in this embodiment thus has higher virtual image information control accuracy and control efficiency.
  • Before a key selection item is triggered, the media information it contains is not displayed.
  • The media information includes image information and audio information.
  • When the key selection item is triggered, the corresponding media playback function is activated; the media information is associated with the virtual image information of the object to be detected, and it helps to grasp the real state of the virtual image information accurately and in real time, facilitating stable and reliable identification and control of the virtual image information.
  • FIG. 8 shows that in S102 in FIG. 1, acquiring the position information and angle information of the object to be detected on the touch screen specifically includes:
  • S801 Divide the detection area of the touch screen into 3 target recognition ranges, where any two target recognition ranges do not overlap.
  • The three target recognition ranges are evenly distributed over the detection area of the touch screen, so as to achieve a precise control response for the object to be detected.
  • The detection area of the touch screen can accurately sense the trigger event of the object to be detected and start the identification control process; therefore, this embodiment uses three target recognition ranges for trigger detection, which guarantees the trigger detection accuracy for multiple objects, improves trigger detection efficiency, and has high practical value.
  • Because this embodiment divides the entire detection area of the touch screen into three target recognition ranges, the trigger event of an object in the corresponding area can be detected in each range, and the status information of the corresponding object can be accurately obtained, achieving rapid identification and control of multiple objects to be detected.
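The partition of S801 can be sketched as below. Splitting the screen into vertical thirds is an assumption; the application only requires three equal, non-overlapping ranges.

```python
# Sketch of S801: split the touch screen's detection area into three equal,
# non-overlapping target recognition ranges and map a trigger point's
# x-coordinate to the index of the range containing it.

def recognition_range(x, screen_width, n_ranges=3):
    """Return the index (0 .. n_ranges-1) of the range containing x."""
    width = screen_width / n_ranges
    idx = int(x // width)
    return min(idx, n_ranges - 1)   # clamp the right edge into the last range

# A 1920-px-wide screen splits into three 640-px ranges; every point lands in
# exactly one range, so the ranges cannot overlap.
```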
  • FIG. 9 shows another implementation process of the active object recognition method provided in this embodiment, where S901 to S905 in FIG. 9 correspond to S101 to S105 in FIG. 1. Therefore, for the specific implementation of S901 to S905 in FIG. 9, please refer to the embodiment in FIG. 1, which will not be repeated here.
  • The following will focus on S906 to S908. After a trigger event of at least one object to be detected is detected, the active object recognition method also includes:
  • S906 Obtain shape and contour information of the object to be detected.
  • The vertical projection of the object to be detected on the horizontal plane is obtained to produce its projection profile, so that the property information of the object can be grasped in real time, facilitating real-time operation and recognition of the object and improving recognition accuracy.
  • S907 Match the shape contour information with a plurality of first contour information pre-stored in the article recognition database in sequence.
  • The item recognition database pre-stores multiple pieces of first contour information and multiple shape types in one-to-one correspondence, and shape recognition can be performed in the database according to the shape contour information
  • in order to classify the object to be detected; the accuracy and speed of shape recognition are high.
  • If the shape contour information is the same as a piece of pre-stored contour information, the shape recognition of the object in the item recognition database has succeeded, and the shape type corresponding to the matching first contour information is output.
  • The shape types include rectangle, rhombus, ellipse, and so on. After shape recognition, the actual attributes and functions of the object to be detected can be obtained more comprehensively, realizing omnidirectional monitoring of the object.
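The database matching of S906/S907 can be sketched as below. Reducing a contour to a (vertex count, aspect ratio) descriptor and the tolerance value are illustrative assumptions; the application does not specify how contours are compared.

```python
# Sketch of S906-S907: match the projected shape contour of the object against
# first contour information pre-stored in the item recognition database, in
# sequence, and output the matching shape type.

DATABASE = [
    {"shape": "rectangle", "vertices": 4, "aspect": 2.0},
    {"shape": "rhombus",   "vertices": 4, "aspect": 1.0},
    {"shape": "ellipse",   "vertices": 0, "aspect": 1.5},
]

def match_shape(vertices, aspect, tol=0.1):
    """Compare the contour descriptor against each pre-stored entry in turn
    and return the matching shape type, or None if nothing matches."""
    for entry in DATABASE:
        if entry["vertices"] == vertices and abs(entry["aspect"] - aspect) <= tol:
            return entry["shape"]
    return None
```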
  • FIG. 10 shows another specific implementation process of the active object recognition method provided in this embodiment.
  • S1001 to S1005 in FIG. 10 correspond to S101 to S105 in FIG. 1,
  • For the specific implementation of S1001 to S1005 in FIG. 10, please refer to the embodiment in FIG. 1.
  • the following will focus on S1006 and S1007, where the active object recognition method in this embodiment also includes:
  • When the vertical distance between the object to be detected and the detection area of the touch screen is within a preset distance, the object to be detected is in a floating state and does not yet trigger the touch screen; the detection
  • area then sends out corresponding prompt information, realizing a high-efficiency recognition control function for the object to be detected.
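The floating-state check above can be sketched as a simple distance classification. The threshold value and the three-state split are assumptions for illustration.

```python
# Sketch of the hover check: an object whose vertical distance to the detection
# area is within the preset distance (but nonzero) is floating and does not
# trigger the touch screen; a prompt would be issued in that state.

def object_state(vertical_distance, preset_distance=5.0):
    """Classify the object's state relative to the detection area."""
    if vertical_distance == 0:
        return "touching"          # trigger event fires
    if vertical_distance <= preset_distance:
        return "floating"          # hovering: prompt the user, no trigger
    return "out_of_range"          # too far away to react to
```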
  • FIG. 11 shows another specific implementation process of S102 in FIG. 1 provided by this embodiment. Please refer to FIG. 11.
  • S102 specifically includes:
  • S1021 Obtain at least one trigger position of the object to be detected on the touch screen, and generate coordinates of the at least one trigger position in a second preset coordinate system to obtain position information.
  • At least one trigger position is generated.
  • the actual coordinates of the at least one trigger position in the reference coordinate system are recorded to obtain accurate position information.
  • S1022 Obtain two trigger positions of the object to be detected on the touch screen, and obtain the angle between the straight line formed by the two trigger positions and the pre-reference line to obtain angle information.
  • the pre-reference line extends along a preset orientation; in this embodiment, the angle between the straight line formed by the two trigger positions and the pre-reference line is less than or equal to 90 degrees.
  • S1023 Generate virtual image information of the object to be detected according to the position information and the angle information.
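The angle computation of S1022 can be sketched as below. The pre-reference line is assumed here to be the screen's horizontal axis; the fold into at most 90 degrees matches the statement above that the angle is less than or equal to 90 degrees.

```python
# Sketch of S1022: compute the angle between the straight line through the two
# trigger positions and a pre-reference line (assumed horizontal), clamped to
# the range [0, 90] degrees.

import math

def trigger_angle(p1, p2):
    (x1, y1), (x2, y2) = p1, p2
    angle = math.degrees(math.atan2(y2 - y1, x2 - x1)) % 180.0
    # A line has no direction, so fold the angle into [0, 90] degrees.
    return min(angle, 180.0 - angle)

# Two trigger points on a diagonal: 45 degrees to the horizontal reference.
a = trigger_angle((0, 0), (10, 10))
```

Together with the trigger coordinates of S1021, this angle gives the pose information S1023 needs to place the virtual image.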
  • FIG. 12 shows that in S102 of FIG. 1 provided by this embodiment, generating the virtual image information of the object to be detected according to the position information and the angle information specifically includes:
  • S1201 Construct an area to be touched according to the position information and the angle information.
  • S1202 Obtain a vertical projection plane image of the object to be detected in the area to be touched.
  • The vertical projection plane image accurately captures the projection information in the area to be touched, which helps to generate more accurate virtual image information.
  • S1203 Extract multiple image feature points of the object to be detected according to the position information and the angle information.
  • the image feature points include various image information of the object to be detected, so as to realize the precise monitoring function of each part of the object to be detected.
  • S1204 Perform image construction on the vertical projection plane image in the three-dimensional space based on the image feature points to obtain a three-dimensional virtual image of the object to be detected, so as to generate virtual image information of the object to be detected.
  • The vertical projection plane image is located in the area to be touched to facilitate image restoration of the plane image; the plane image is then extended into three-dimensional space, improving the authenticity of the virtual image information and the human-machine experience.
  • the object recognition method can recognize and manipulate three-dimensional virtual images.
  • FIG. 13 shows that in S105 of FIG. 1 provided by this embodiment, the control of the virtual image information of the target detection object according to the image processing signal specifically includes:
  • After the image identification code and the identification code of the target detection object are successfully matched, it is determined whether the target detection object is locked. Once the virtual image information of the target detection object is locked, it cannot be operated on and is in an unchangeable state. Only after the target detection object is unlocked can the control function for its virtual image information be realized, which ensures the control accuracy and safety of the virtual image information.
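The lock check of FIG. 13 can be sketched as a guard applied after a successful code match. The data layout and function name are illustrative assumptions.

```python
# Sketch of the lock check in S105: after the codes match, the image
# processing signal is applied only when the target detection object
# is not locked.

def try_control(target, signal):
    """Apply the image processing signal only when the target is unlocked."""
    if target["locked"]:
        return False            # virtual image information is unchangeable
    target["image"].update(signal)
    return True

target = {"locked": True, "image": {"x": 0}}
first = try_control(target, {"x": 5})    # locked: the image stays unchanged
target["locked"] = False                 # the object is unlocked
second = try_control(target, {"x": 5})   # unlocked: control succeeds
```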
  • FIG. 14 shows a schematic structural diagram of an active object recognition device 140 based on a touch screen provided by this embodiment.
  • The active object recognition device 140 includes: a recognition module 1401, an image detection module 1402, an image recognition module 1403, an image matching module 1404, and an image control module 1405.
  • the identification module 1401 is configured to output at least one identification code corresponding to at least one object to be detected one-to-one when a trigger event is detected for at least one object to be detected.
  • the image detection module 1402 is used to obtain position information and angle information of the object to be detected on the touch screen to generate virtual image information corresponding to the object to be detected.
  • the image recognition module 1403 is used for receiving an image processing signal and generating an image recognition code matching the image processing signal when an image processing event is detected.
  • the image matching module 1404 is configured to sequentially match the image identification code with at least one identification code.
  • the image control module 1405 is used to control the virtual image information of the target detection object according to the image processing signal when the image identification code is successfully matched with one of the identification codes.
  • the target detection object includes the object to be detected corresponding to the successfully matched identification code.
  • the active object recognition device 140 further includes: a wireless transmission module 1406, and the wireless transmission module 1406 is configured to wirelessly transmit the image identification code to the image matching module 1404.
  • the image matching module 1404 and the image recognition module 1403 in this embodiment can perform wireless transmission, which ensures the internal communication compatibility of the active object recognition device 140.
  • The active object recognition device 140 further includes a power module 1407 connected to the image detection module 1402 and the image recognition module 1403; the power module 1407 supplies power to the image detection module 1402 and the image
  • recognition module 1403, guaranteeing the internal power supply safety of the active object recognition device 140.
  • The image detection module 1402 includes a gyroscope; for example, the gyroscope model is MPU6050.
  • The wireless transmission module 1406 includes an nRF24L01 wireless transmission chip.
  • The identification module 1401 includes three identification units, where the three identification units correspond respectively to the three detection areas of the touch screen, and any two detection areas do not overlap.
  • Each identification unit is used to detect a trigger event emitted by at least one object to be detected in a corresponding detection area, and output at least one identification code corresponding to the at least one object to be detected one-to-one.
  • the provision of three recognition units in this embodiment can accurately detect the trigger state of the object to be detected on the touch screen, and improve the accuracy and efficiency of the recognition control of the object to be detected.
  • the active object recognition device 140 further includes a display module 1408, which is connected to the image control module 1405; the display module 1408 is used to generate a status prompt message when the virtual image information of the target detection object is being controlled.
  • optionally, the display module 1408 is a display screen: when the virtual image information of the target detection object is controlled according to the image processing signal, the display screen shows a status prompt message indicating the actual working state of the object to be detected, which improves the practical value of the active object recognition device 140.
  • the touch-screen-based active object recognition device 140 in FIG. 14 corresponds to the touch-screen-based active object recognition method of FIG. 1 to FIG. 13; for its specific implementation, refer to the embodiments in FIG. 1 to FIG. 13, which are not repeated here.
  • the active object recognition system 150 includes a touch screen 1501 and the above-mentioned active object recognition device 140, where the active object recognition device 140 is connected to the touch screen 1501.
  • the active object recognition device 140 is used to control the virtual image information of the object to be detected according to the image processing signal when an image processing event and a trigger event of at least one object to be detected are detected;
  • thus, when an object triggers the touch screen 1501, the object to be detected is identified and its virtual image is manipulated according to the user's actual functional requirements.
  • the specific implementation of the active object recognition system 150 in FIG. 15 can refer to the embodiments in FIG. 1 to FIG. 14, which will not be repeated here.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A touch-screen-based active object recognition method, a touch-screen-based active object recognition device, and an active object recognition system. The active object recognition method includes: when a trigger event emitted by at least one object to be detected is detected, outputting an identification code corresponding to the object to be detected (S101, S301, S401, S701, S901, S1001); when an image processing event is detected, generating an image identification code matching the image processing signal (S103, S303, S404, S705, S903, S1003); and, according to the matching result between the image identification code and the at least one identification code, locating the virtual image information of the target detection object and controlling it according to the image processing signal (S105, S406, S707, S905, S1005), so as to achieve object recognition and control. Multiple objects to be detected can be recognized accurately, and a specific object to be detected can be controlled in multiple ways according to image processing requirements, which simplifies the manipulation of multiple pieces of virtual image information and provides better human-computer interaction.

Description

Active object recognition method, object recognition device and object recognition system — Technical Field
This application relates to the technical field of object recognition, and in particular to a touch-screen-based active object recognition method, a touch-screen-based active object recognition device, and an active object recognition system.
Background
With rising living standards, touch-screen object recognition has been widely applied in entertainment venues, exhibition halls, and product displays. Object recognition can be applied to different kinds of objects and gives users a more realistic experience: when a user places a recognition device on a touch screen, the display device can directly present the object information the user needs and allow it to be manipulated. This simplifies the use and display of objects, lowers cost, enhances the sense of a technology-rich experience, and creates a realistic usage environment. However, the related art only performs passive recognition of objects: the number of recognizable objects is limited, extensibility is low, and recognition errors are large, which easily leads to significant recognition and control errors and degrades human-computer interaction.
Summary of the Invention
Technical Problem
One objective of the embodiments of this application is to provide a touch-screen-based active object recognition method, a touch-screen-based active object recognition device, and an active object recognition system, aiming to solve the problems in the related art of large recognition errors, poor compatibility, and limited extensibility when objects are recognized passively.
Solution to the Problem
Technical Solution
To solve the above technical problem, the embodiments of this application adopt the following technical solutions:
In a first aspect, a touch-screen-based active object recognition method is provided, including:
when a trigger event of at least one object to be detected is detected, outputting at least one identification code in one-to-one correspondence with the at least one object to be detected;
acquiring position information and angle information of the object to be detected on the touch screen, to generate virtual image information corresponding to the object to be detected;
when an image processing event is detected, receiving an image processing signal and generating an image identification code matching the image processing signal;
sequentially matching the image identification code against the at least one identification code;
when the image identification code successfully matches one of the identification codes, controlling the virtual image information of a target detection object according to the image processing signal;
where the target detection object includes the object to be detected corresponding to the successfully matched identification code.
In a second aspect, a touch-screen-based active object recognition device is provided, including:
an identification module, configured to output, when a trigger event of at least one object to be detected is detected, at least one identification code in one-to-one correspondence with the at least one object to be detected;
an image detection module, configured to acquire position information and angle information of the object to be detected on the touch screen, to generate virtual image information corresponding to the object to be detected;
an image recognition module, configured to receive, when an image processing event is detected, an image processing signal and generate an image identification code matching the image processing signal;
an image matching module, configured to sequentially match the image identification code against the at least one identification code; and
an image control module, configured to control, when the image identification code successfully matches one of the identification codes, the virtual image information of a target detection object according to the image processing signal;
where the target detection object includes the object to be detected corresponding to the successfully matched identification code.
In a third aspect, an active object recognition system is provided, including:
a touch screen and the active object recognition device described above, where the active object recognition device is connected to the touch screen;
the active object recognition device is configured to control, when an image processing event and a trigger event of at least one object to be detected are detected, the virtual image information of the object to be detected that is to be controlled, according to the image processing signal.
Beneficial Effects of the Invention
Brief Description of the Drawings
To describe the technical solutions in the embodiments of this application more clearly, the drawings needed for the embodiments or the exemplary description are briefly introduced below.
FIG. 1 is a flowchart of an implementation of the touch-screen-based active object recognition method provided by an embodiment of this application;
FIG. 2 is a flowchart of an implementation of step S105 of the method in FIG. 1;
FIG. 3 is a flowchart of another implementation of the touch-screen-based active object recognition method provided by an embodiment of this application;
FIG. 4 is a flowchart of another implementation of the touch-screen-based active object recognition method provided by an embodiment of this application;
FIG. 5 is a flowchart of an implementation of step S403 of the method in FIG. 4;
FIG. 6 is a schematic diagram of the relative positions of the closed ring-shaped area and the virtual image information provided by an embodiment of this application;
FIG. 7 is a flowchart of another implementation of the touch-screen-based active object recognition method provided by an embodiment of this application;
FIG. 8 is a flowchart of an implementation of step S102 of the method in FIG. 1;
FIG. 9 is a flowchart of another implementation of the touch-screen-based active object recognition method provided by an embodiment of this application;
FIG. 10 is a flowchart of another implementation of the touch-screen-based active object recognition method provided by an embodiment of this application;
FIG. 11 is a flowchart of another implementation of step S102 of the method in FIG. 1;
FIG. 12 is a flowchart of another implementation of step S102 of the method in FIG. 1;
FIG. 13 is a flowchart of another implementation of step S105 of the method in FIG. 1;
FIG. 14 is a schematic structural diagram of the touch-screen-based active object recognition device provided by an embodiment of this application;
FIG. 15 is a schematic structural diagram of the active object recognition system provided by an embodiment of this application.
Embodiments of the Invention
FIG. 1 shows the implementation flow of the touch-screen-based active object recognition method provided by this embodiment, by which at least one object to be detected can be recognized and detected. Note that "object to be detected" herein includes any type of object, such as a mobile phone, toy, cosmetic item, piece of jewelry, or beverage.
The active object recognition method includes the following steps:
S101: when a trigger event emitted by at least one object to be detected is detected, output at least one identification code in one-to-one correspondence with the at least one object to be detected. The touch screen has trigger detection capability: when an object to be detected triggers the touch screen and its trigger event is detected on the screen, the recognition process for that object is started, improving the precision and efficiency of recognition and control. The identification code serves as the identifier of the object to be detected, and multiple objects to be detected correspond one-to-one with multiple identification codes, so an object can be accurately distinguished by its identification code; this supports precise recognition and control of multiple objects and broadens the applicability of the active object recognition method.
S102: acquire position information and angle information of the object to be detected on the touch screen, to generate virtual image information corresponding to the object to be detected.
When the object to be detected lies on the touch screen, its actual position and state on the screen can be sensed, enabling more accurate and faster control; the corresponding virtual image information is derived from the object's actual state information, so the object can be manipulated virtually with greater precision, simulating the operating and control steps of the real environment for a better interactive experience.
S103: when an image processing event is detected, receive an image processing signal and generate an image identification code matching the image processing signal.
When an image processing event is detected on the touch screen, recognition and manipulation of the object's virtual image information begins. The image processing signal carries image processing information, through which image pixels can be operated on accurately and synchronously; optionally, the image processing signal represents the user's image-manipulation request. The image processing signal corresponds one-to-one with the image identification code, so a specific object can be identified from among the at least one object to be detected, enabling precise and flexible recognition and control.
S104: sequentially match the image identification code against the at least one identification code.
The image identification code and the identification codes can be compared as data; by comparing them, it is determined whether the object matching an identification code is the object to be controlled. This achieves precise identification and avoids recognition and control errors.
S105: when the image identification code successfully matches one of the identification codes, control the virtual image information of the target detection object according to the image processing signal.
The target detection object includes the object to be detected corresponding to the successfully matched identification code.
For example, when trigger events of three objects to be detected, namely object A, object B and object C, are detected on the touch screen: if the image identification code fails to match the identification code of object A, matching continues with object B; if it successfully matches the identification code of object B, then object B is the target detection object, and the virtual image information of object B is controlled according to the image processing signal to satisfy the virtual manipulation requirements. The virtual simulation environment can fully reproduce the control experience of the real environment, giving better human-computer interaction.
FIG. 1 shows the specific implementation flow of the active object recognition method: each object to be detected is assigned an identity code to obtain the corresponding identification code; when virtual manipulation of an object is required, the image processing signal is matched against the identification codes to find the target detection object, whose virtual image information is then adaptively controlled according to the image processing signal. This simplifies the recognition steps, provides a more realistic virtual manipulation experience, and improves the precision of recognition and control, solving the problems of large recognition errors and limited general applicability in the conventional technology.
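The S101–S105 flow above (identity coding, then sequential matching of the image identification code to find the target detection object) can be sketched as follows. This is a hypothetical illustration only: the class and method names (`Recognizer`, `register_object`, `find_target`) and the code strings are inventions for this sketch, not part of the patent.

```python
# Hypothetical sketch of the S101-S105 flow: identity codes are assigned to
# detected objects one-to-one, and an incoming image identification code is
# matched sequentially against them to find the target detection object.

class Recognizer:
    def __init__(self):
        self.id_codes = {}  # identity code -> object to be detected

    def register_object(self, id_code, obj):
        """S101: a trigger event yields a one-to-one identity code."""
        self.id_codes[id_code] = obj

    def find_target(self, image_id_code):
        """S104/S105: match sequentially; return the target detection object."""
        for code, obj in self.id_codes.items():
            if code == image_id_code:  # match succeeded
                return obj
        return None                    # no identity code matched

r = Recognizer()
for code, name in [("ID-A", "object A"), ("ID-B", "object B"), ("ID-C", "object C")]:
    r.register_object(code, name)
```

With the three objects of the example registered, matching against `"ID-B"` would select object B as the target detection object, mirroring the A-then-B matching order described above.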
FIG. 2 shows how, in S105 of FIG. 1, the virtual image information of the target detection object is controlled according to the image processing signal, specifically including:
S1051: parse the image processing signal to obtain a movement control instruction, an audio control instruction and a text control instruction.
The image processing signal carries comprehensive control information; after parsing, this control information can be read more precisely, enabling fast and accurate control of the object and meeting the user's multi-faceted control needs, which gives the active object recognition method higher practical value.
S1052: control the virtual image information of the target detection object to move in a preset direction according to the movement control instruction; or play preset audio content according to the audio control instruction; or display preset text content according to the text control instruction.
Optionally, the virtual image information of the target detection object is moved in a preset direction and at a preset speed according to the movement control instruction.
The movement control instruction drives the virtual image information to move adaptively, changing its actual position to meet the user's viewing needs. The audio control instruction provides audio output: when the virtual image information of the target detection object is obtained, sound is produced according to the image processing signal; for example, the audio content may describe the shape of the virtual image information. The text control instruction provides text display, so that users can view information about the target detection object in real time.
In this embodiment, the image processing signal can move the virtual image information, or emit sounds and display text related to it, satisfying the user's visual, auditory and other sensory experiences over a very wide range of applications.
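A minimal sketch of the S1051–S1052 parse-and-dispatch step is given below. The dictionary signal format and the function names are assumptions made purely for illustration; the patent does not specify how the image processing signal is encoded.

```python
# Illustrative sketch of S1051-S1052: parse an image processing signal into
# movement / audio / text control instructions and dispatch accordingly.

def parse_signal(signal: dict):
    """S1051: split the image processing signal into the three instruction kinds."""
    return signal.get("move"), signal.get("audio"), signal.get("text")

def apply_signal(signal: dict, position: tuple):
    """S1052: move the virtual image, or record audio/text to play or display."""
    move, audio, text = parse_signal(signal)
    actions = []
    if move is not None:
        dx, dy = move  # preset direction (optionally scaled by a preset speed)
        position = (position[0] + dx, position[1] + dy)
        actions.append(("moved", position))
    if audio is not None:
        actions.append(("play", audio))
    if text is not None:
        actions.append(("show", text))
    return position, actions

pos, acts = apply_signal({"move": (5, 0), "text": "shape: ellipse"}, (10, 10))
```

Here a signal carrying both a movement and a text instruction shifts the virtual image and queues the text for display, matching the "move, or play audio, or show text" behavior of S1052.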
As an optional implementation, FIG. 3 shows another specific implementation of the active object recognition method provided by this embodiment; refer to FIG. 1 and FIG. 2. S301–S306 in FIG. 3 correspond to S101–S104 and S1051–S1052 in FIG. 1 and FIG. 2, so their specific implementation is not repeated here; S307 is discussed below. When the target detection object is moved in a preset direction according to the movement control instruction, the active object recognition method further includes:
S307: acquire and display the coordinates of the virtual image information of the target detection object in a first preset coordinate system.
Optionally, while the target detection object is moved according to the movement control instruction, the coordinate changes of its virtual image information over a preset time period are recorded and its trajectory over that period is plotted, so that the position history of the virtual image information can be monitored in real time on an electronic map.
When the position of the virtual image information is manipulated according to the movement control instruction, the first preset coordinate system serves as the positional reference, yielding the actual position of the target detection object's virtual image information in that coordinate system. This enables real-time tracking and control of the moving virtual image, greatly improving the safety and efficiency of recognition and control, and the position of the virtual image information is displayed.
As an optional implementation, when trigger events of multiple objects to be detected are detected, multiple identification codes are generated according to the chronological order in which the objects trigger the touch screen, with the multiple objects to be detected in one-to-one correspondence with the multiple identification codes.
In this embodiment, the trigger states of objects to be detected can be detected on the touch screen in real time, and each object's identification code is tied to its trigger time. This guarantees accurate and efficient identity coding of each object, enabling precise recognition and control. Moreover, because the identity coding does not depend on the area of the touch screen, trigger events from any number of objects to be detected can be received through the touch screen while preserving the recognition and control functions, giving extremely strong compatibility; the active object recognition method of this embodiment can therefore recognize and control multiple objects and has high practical value.
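The chronological coding rule just described (codes issued in the order in which objects trigger the touch screen) can be sketched as follows. The `"OBJ-<n>"` code format is a hypothetical choice for illustration; the patent only requires one-to-one codes in trigger-time order.

```python
# Minimal sketch: identity codes are issued in the order in which objects
# trigger the touch screen, one code per object, independent of screen area.

from itertools import count

def assign_identity_codes(trigger_events):
    """trigger_events: list of (timestamp, object_name) tuples."""
    seq = count(1)
    codes = {}
    # sort by trigger time so codes follow the chronological order
    for _, obj in sorted(trigger_events, key=lambda e: e[0]):
        codes[obj] = f"OBJ-{next(seq)}"
    return codes

events = [(2.0, "toy"), (0.5, "phone"), (1.2, "jewelry")]
codes = assign_identity_codes(events)
```

Since the coding depends only on trigger order, any number of objects can be accommodated, which reflects the compatibility claim of this embodiment.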
As an optional implementation, FIG. 4 shows another implementation flow of the active object recognition method provided by this embodiment, where S401–S402 and S404–S406 correspond to S101–S105 in FIG. 1; for their specific implementation refer to the embodiment of FIG. 1, which is not repeated here. S403 is discussed below. After S402, the active object recognition method further includes:
S403: generate a light-source prompt signal matching the virtual image information.
Since the corresponding virtual image information can be obtained precisely from each object's actual state information on the touch screen, the virtual image can be controlled synchronously to achieve a precise virtual electronic control effect, which is highly convenient for users. In this embodiment the light-source prompt signal displays corresponding status prompt information, highlighting the interactivity presented by the virtual image information and making it easier for the user to perform the corresponding virtual image control, which improves the user-friendliness of the active object recognition method.
As an optional implementation, FIG. 5 shows the specific implementation of S403 in FIG. 4; referring to FIG. 5, S403 specifically includes:
S4031: obtain the closed ring-shaped area corresponding to the virtual image information, where the virtual image information lies inside the closed ring-shaped area.
The closed ring-shaped area surrounds the virtual image information, making it easy to determine the positional boundary of the virtual image and to manipulate it, improving the precision and efficiency of controlling the virtual image information.
S4032: emit the light-source prompt signal around the outside of the closed ring-shaped area.
Optionally, a light-source prompt signal of preset light intensity and preset brightness is emitted around the outside of the closed ring-shaped area.
For example, FIG. 6 shows the relative positions of the closed ring-shaped area 602 and the virtual image information 601: since the virtual image information 601 lies inside the closed ring-shaped area 602, the corresponding light-source prompt is emitted around the virtual image information 601 to meet the user's visual needs.
As an optional implementation, FIG. 7 shows another implementation flow of the active object recognition method provided by this embodiment; S701–S702 and S705–S707 in FIG. 7 correspond to S101–S105 in FIG. 1, so refer to the embodiment of FIG. 1 for their specific implementation, which is not repeated here. S703 and S704 are discussed below. After S702, the active object recognition method further includes:
S703: generate at least two button options associated with the object to be detected.
Each button option carries a specific button control function: once the virtual image information of the object to be detected is obtained, the button options explain and describe the functions and characteristics of the virtual image information more comprehensively, giving the user a better experience; the active object recognition method of this embodiment thus achieves higher control precision and efficiency for the virtual image information.
S704: if one of the button options is triggered, display the media information contained in that button option.
When a button option is not triggered, the media information it contains is not displayed.
Optionally, the media information includes image information, audio information and the like; when a button option is triggered, the corresponding media playback is started. The media information is associated with the virtual image information of the object to be detected, helping users grasp the real state of the virtual image accurately and in real time, and supporting stable and reliable recognition and control of the virtual image information.
As an optional implementation, FIG. 8 shows how, in S102 of FIG. 1, the position information and angle information of the object to be detected on the touch screen are acquired, specifically including:
S801: divide the detection region of the touch screen into three target recognition ranges, where no two target recognition ranges overlap.
Optionally, the three target recognition ranges are evenly distributed over the detection region of the touch screen, to provide a precise control response for the objects to be detected.
The detection region of the touch screen can precisely sense the trigger events of objects to be detected and start the recognition and control process; performing trigger detection separately within three target recognition ranges both guarantees trigger-detection precision for multiple objects and improves trigger-detection efficiency, which is of high practical value.
S802: when a trigger event of an object to be detected is detected in any one of the target recognition ranges, acquire the position information and angle information of the object within the corresponding target recognition range.
Since this embodiment divides the whole detection region of the touch screen evenly into three target recognition ranges, trigger events of objects can be detected in the corresponding area of each range, and the state information of the corresponding objects acquired precisely, enabling fast recognition and control of multiple objects to be detected.
This embodiment therefore uses three target recognition ranges to precisely determine the state information of multiple objects on the touch screen, simplifying the recognition steps, avoiding recognition and control errors, and improving compatibility.
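The region lookup of S801–S802 can be sketched as below. Splitting the screen into three equal vertical strips is an assumption made for this sketch; the patent only requires three non-overlapping (optionally evenly distributed) target recognition ranges.

```python
# Sketch of S801-S802: the touch-screen detection region is split into three
# non-overlapping target recognition ranges (here: equal vertical strips),
# and a trigger position is resolved to the range that contains it.

SCREEN_WIDTH = 1920  # hypothetical screen width in pixels

def recognition_range(x: float, width: float = SCREEN_WIDTH) -> int:
    """Return the index (0, 1 or 2) of the target recognition range holding x."""
    strip = width / 3
    index = int(x // strip)
    return min(index, 2)  # clamp the right edge into the last range
```

A trigger at x = 100 would then be handled by range 0, and one at x = 1919 by range 2, so each trigger event is processed within exactly one target recognition range.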
As an optional implementation, FIG. 9 shows another implementation flow of the active object recognition method provided by this embodiment, where S901–S905 correspond to S101–S105 in FIG. 1; refer to the embodiment of FIG. 1 for their specific implementation, which is not repeated here. S906–S908 are discussed below. After a trigger event emitted by at least one object to be detected is detected, the active object recognition method further includes:
S906: acquire the shape contour information of the object to be detected.
For example, the vertical projection of the object onto the horizontal plane is acquired to obtain its projected contour, so that the object's shape properties are known in real time, facilitating real-time operational recognition of the object and improving recognition accuracy.
S907: match the shape contour information in turn against multiple first contour entries pre-stored in an object recognition database.
The object recognition database pre-stores multiple first contour entries and multiple shape types in one-to-one correspondence, so shape recognition can be performed in the database from the shape contour information, completing the classification of the object to be detected with high precision and speed.
S908: if the shape contour information successfully matches one of the first contour entries, output the shape type corresponding to that first contour entry.
If the shape contour information equals a pre-stored contour entry, shape recognition of the object in the object recognition database has succeeded, and the shape type corresponding to the successfully matched first contour entry is output. Shape types include rectangle, rhombus, ellipse and so on; after shape recognition, the actual properties and functions of the object can be grasped more comprehensively, achieving all-around monitoring of the object to be detected.
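A hedged sketch of the S906–S908 database lookup follows. Real systems would compare proper contour descriptors (for example Hu moments); the simple vertex-count signature used here, and the database contents, are stand-ins invented purely for illustration.

```python
# Sketch of S906-S908: match an object's shape contour against first contour
# entries pre-stored in an object recognition database, in sequence.

SHAPE_DB = [
    (4, "rectangle"),  # (contour signature, shape type) pairs
    (0, "ellipse"),    # 0 vertices as a stand-in signature for curved shapes
    (3, "triangle"),
]

def match_shape(contour_signature):
    """S907/S908: sequential matching; return the shape type or None."""
    for signature, shape_type in SHAPE_DB:
        if signature == contour_signature:
            return shape_type  # first successful match wins
    return None                # no first contour entry matched
```

As in S908, a successful match outputs the shape type of the matched entry, while an unmatched contour yields no classification.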
As an optional implementation, FIG. 10 shows another specific implementation flow of the active object recognition method provided by this embodiment, where S1001–S1005 correspond to S101–S105 in FIG. 1; refer to the embodiment of FIG. 1 for their specific implementation. S1006 and S1007 are discussed below. The active object recognition method of this embodiment further includes:
S1006: when no trigger event of at least one object to be detected is detected, detect whether the object to be detected is in a hovering state.
S1007: if the object to be detected is found to be hovering, emit position prompt information in a preset region of the touch screen, where the preset region is intended to receive the object to be detected.
For example, when the vertical distance between the object and the detection region of the touch screen is within a preset distance, the object is considered to be hovering and has not yet triggered the touch screen; the corresponding prompt information is then emitted in the preset region of the touch screen, enabling efficient recognition and control of the object to be detected.
As an optional implementation, FIG. 11 shows another specific implementation of S102 in FIG. 1; referring to FIG. 11, S102 specifically includes:
S1021: acquire at least one trigger position of the object to be detected on the touch screen, and generate the coordinates of the at least one trigger position in a second preset coordinate system, to obtain the position information.
When the object to be detected contacts and triggers the detection region of the touch screen, at least one trigger position is produced; recording the actual coordinates of each trigger position in the reference coordinate system yields precise position information.
S1022: acquire two trigger positions of the object to be detected on the touch screen, and obtain the angle between the straight line formed by the two trigger positions and a preset reference line, to obtain the angle information.
For example, the preset reference line extends along a preset direction; in this embodiment, the angle between the line through the two trigger positions and the preset reference line is less than or equal to 90 degrees.
The contact line formed between the object and the detection region of the touch screen is obtained, and the actual angle value of the object is derived from the angle between this contact line and the preset reference, enabling precise monitoring of the object's structural properties.
S1023: generate the virtual image information of the object to be detected from the position information and the angle information.
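The angle computation of S1022 can be sketched as follows. Taking the horizontal axis as the preset reference line is an assumption for this sketch (the embodiment only says the reference line extends along a preset direction); folding the result into [0°, 90°] matches the "at most 90 degrees" condition above.

```python
# Sketch of S1022: angle information is the angle between the line through two
# trigger positions and a reference line, assumed here to be the horizontal axis.

import math

def angle_to_baseline(p1, p2):
    """Angle in degrees, folded into [0, 90], between line p1-p2 and horizontal."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    # absolute components fold the result into the first quadrant (0..90 degrees)
    return math.degrees(math.atan2(abs(dy), abs(dx)))
```

Two trigger positions at (0, 0) and (1, 1) would give 45 degrees, while a horizontal contact line gives 0 degrees, consistent with the bound stated in this embodiment.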
As an optional implementation, FIG. 12 shows how, in S102 of FIG. 1, the virtual image information of the object to be detected is generated from the position information and the angle information, specifically including:
S1201: construct a to-be-touched region from the position information and the angle information.
The to-be-touched region of the object is obtained on the detection region of the touch screen, giving precise localization of the object to be detected; for example, the trigger positions of the object on the touch screen are aggregated to obtain the to-be-touched region in real time.
S1202: acquire a vertically projected planar image of the object to be detected within the to-be-touched region.
The vertically projected planar image accurately captures the projection information within the to-be-touched region, which helps generate more precise virtual image information.
S1203: extract multiple image feature points of the object to be detected from the position information and the angle information.
The image feature points carry various kinds of image information about the object, enabling precise monitoring of each part of the object to be detected.
S1204: based on the image feature points, construct the vertically projected planar image in three-dimensional space to obtain a three-dimensional virtual image of the object to be detected, so as to generate the virtual image information of the object.
The vertically projected planar image lies within the touch region, making image restoration of the planar image convenient; the planar image is then extended into a three-dimensional image, improving the realism of the virtual image information and the interactive experience. The active object recognition method can recognize and manipulate the three-dimensional virtual image.
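One very simplified reading of S1201–S1204 is sketched below: trigger positions stand in for the planar (vertically projected) outline, which is lifted into three-dimensional space. The fixed-height extrusion is an assumption of this sketch; the patent does not specify how the 3D construction is performed.

```python
# Illustrative sketch of S1201-S1204: a planar outline taken from the
# to-be-touched region is lifted into 3D with a fixed extrusion height.

def build_virtual_image(trigger_points, height=10.0):
    """Return 3D feature points: the planar outline plus a lifted copy of it."""
    base = [(x, y, 0.0) for x, y in trigger_points]    # S1202: planar image
    top = [(x, y, height) for x, y in trigger_points]  # S1204: 3D construction
    return base + top

pts = build_virtual_image([(0, 0), (4, 0), (4, 2), (0, 2)])
```

For a rectangular outline of four trigger positions this yields eight 3D feature points, a minimal stand-in for the three-dimensional virtual image described above.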
As an optional implementation, FIG. 13 shows how, in S105 of FIG. 1, the virtual image information of the target detection object is controlled according to the image processing signal, specifically including:
S1301: detect whether the virtual image information of the target detection object has been locked in advance.
S1302: if the virtual image information of the target detection object has been locked in advance, then, upon receiving an unlock instruction, control the virtual image information of the target detection object according to the image processing signal.
S1303: if the virtual image information of the target detection object has not been locked in advance, control the virtual image information of the target detection object according to the image processing signal directly.
When the image identification code successfully matches the identification code of the target detection object, it is determined whether the target detection object is locked. Once the virtual image information of the target detection object is locked, it cannot be operated on and remains unmodifiable; only after the target detection object is unlocked can its virtual image information be controlled, which guarantees the precision and safety of controlling the virtual image information.
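The lock guard of S1301–S1303 can be sketched as a simple gate. The field and function names here are illustrative only; the patent does not prescribe how the lock state or unlock instruction is represented.

```python
# Minimal sketch of S1301-S1303: control of the target's virtual image is gated
# on a lock flag; a locked image can only be controlled after an unlock
# instruction is received.

class VirtualImage:
    def __init__(self, locked=False):
        self.locked = locked
        self.controlled = False

def control(image: VirtualImage, unlock_received: bool = False) -> bool:
    """Return True if the image was controlled per the image processing signal."""
    if image.locked and not unlock_received:
        return False           # S1302: wait for the unlock instruction
    image.controlled = True    # S1302/S1303: apply the control
    return True
```

An unlocked image is controlled directly (S1303), while a locked one stays unmodifiable until the unlock instruction arrives (S1302), matching the safety behavior described above.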
It should be understood that the numbering of the steps in the above embodiments does not imply an execution order; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of this application.
FIG. 14 shows the structure of the touch-screen-based active object recognition device 140 provided by this embodiment. Referring to FIG. 14, the active object recognition device 140 includes an identification module 1401, an image detection module 1402, an image recognition module 1403, an image matching module 1404, and an image control module 1405.
The identification module 1401 is configured to output, when a trigger event of at least one object to be detected is detected, at least one identification code in one-to-one correspondence with the at least one object to be detected.
The image detection module 1402 is configured to acquire position information and angle information of the object to be detected on the touch screen, to generate virtual image information corresponding to the object to be detected.
The image recognition module 1403 is configured to receive, when an image processing event is detected, an image processing signal and generate an image identification code matching the image processing signal.
The image matching module 1404 is configured to sequentially match the image identification code against the at least one identification code.
The image control module 1405 is configured to control, when the image identification code successfully matches one of the identification codes, the virtual image information of the target detection object according to the image processing signal.
The target detection object includes the object to be detected corresponding to the successfully matched identification code.
As an optional implementation, referring to FIG. 14, the active object recognition device 140 further includes a wireless transmission module 1406, configured to wirelessly transmit the image identification code to the image matching module 1404.
The image matching module 1404 and the image recognition module 1403 of this embodiment can therefore communicate wirelessly, ensuring internal communication compatibility within the active object recognition device 140.
As an optional implementation, referring to FIG. 14, the active object recognition device 140 further includes a power module 1407 connected to the image detection module 1402 and the image recognition module 1403; the power module 1407 supplies power to the image detection module 1402 and the image recognition module 1403, ensuring safe internal power delivery within the active object recognition device 140.
As an optional implementation, the image detection module 1402 includes a gyroscope; for example, an MPU6050.
As an optional implementation, the wireless transmission module 1406 includes an nRF24L01 wireless transceiver chip.
As an optional implementation, the identification module 1401 includes three identification units, which correspond respectively to three detection areas of the touch screen, where no two detection areas overlap.
Each identification unit is configured to detect trigger events emitted by at least one object to be detected within its corresponding detection area, and to output at least one identification code in one-to-one correspondence with the at least one object to be detected.
Providing three identification units in this embodiment allows the trigger states of objects on the touch screen to be detected accurately, improving the precision and efficiency of recognizing and controlling the objects to be detected.
As an optional implementation, referring to FIG. 14, the active object recognition device 140 further includes a display module 1408 connected to the image control module 1405; the display module 1408 is configured to generate a status prompt message when the virtual image information of the target detection object is being controlled.
Optionally, the display module 1408 is a display screen: when the virtual image information of the target detection object is controlled according to the image processing signal, the display screen shows a status prompt message indicating the actual working state of the object to be detected, improving the practical value of the active object recognition device 140.
Note that the touch-screen-based active object recognition device 140 of FIG. 14 corresponds to the touch-screen-based active object recognition method of FIG. 1 to FIG. 13; for its specific implementation, refer to the embodiments of FIG. 1 to FIG. 13, which are not repeated here.
FIG. 15 shows the structure of the active object recognition system 150 provided by this embodiment. The active object recognition system 150 includes a touch screen 1501 and the active object recognition device 140 described above, where the active object recognition device 140 is connected to the touch screen 1501.
The active object recognition device 140 is configured to control, when an image processing event and a trigger event of at least one object to be detected are detected, the virtual image information of the object to be detected that is to be controlled, according to the image processing signal; thus, when multiple objects trigger the touch screen 1501, the objects to be detected are identified and their virtual images manipulated according to the user's actual functional requirements.
For the specific implementation of the active object recognition system 150 in FIG. 15, refer to the embodiments of FIG. 1 to FIG. 14, which are not repeated here.
The above are only optional embodiments of this application and are not intended to limit it. Various modifications and variations will occur to those skilled in the art; any modification, equivalent replacement, improvement and the like made within the spirit and principles of this application shall fall within the scope of the claims of this application.

Claims (20)

  1. A touch-screen-based active object recognition method, characterized by comprising:
    when a trigger event of at least one object to be detected is detected, outputting at least one identification code in one-to-one correspondence with the at least one object to be detected;
    acquiring position information and angle information of the object to be detected on a touch screen, to generate virtual image information corresponding to the object to be detected;
    when an image processing event is detected, receiving an image processing signal and generating an image identification code matching the image processing signal;
    sequentially matching the image identification code against the at least one identification code;
    when the image identification code successfully matches one of the identification codes, controlling virtual image information of a target detection object according to the image processing signal;
    wherein the target detection object comprises the object to be detected corresponding to the successfully matched identification code.
  2. The active object recognition method of claim 1, characterized in that controlling the virtual image information of the target detection object according to the image processing signal specifically comprises:
    parsing the image processing signal to obtain a movement control instruction, an audio control instruction and a text control instruction;
    controlling the virtual image information of the target detection object to move in a preset direction according to the movement control instruction, or playing preset audio content according to the audio control instruction, or displaying preset text content according to the text control instruction.
  3. The active object recognition method of claim 2, characterized in that, when the target detection object is controlled to move in a preset direction according to the movement control instruction, the active object recognition method further comprises:
    acquiring and displaying coordinates of the virtual image information of the target detection object in a first preset coordinate system.
  4. The active object recognition method of claim 1, characterized in that, when trigger events of multiple objects to be detected are detected, multiple identification codes are generated according to the chronological order in which the multiple objects to be detected trigger the touch screen, the multiple objects to be detected being in one-to-one correspondence with the multiple identification codes.
  5. The active object recognition method of claim 1, characterized in that, after acquiring the position information and angle information of the object to be detected on the touch screen to generate the virtual image information corresponding to the object to be detected, the active object recognition method further comprises:
    generating a light-source prompt signal matching the virtual image information.
  6. The active object recognition method of claim 5, characterized in that generating a light-source prompt signal matching the virtual image information specifically comprises:
    obtaining a closed ring-shaped area corresponding to the virtual image information, the virtual image information lying inside the closed ring-shaped area;
    emitting the light-source prompt signal around the outside of the closed ring-shaped area.
  7. The active object recognition method of claim 1, characterized in that, after generating the virtual image information corresponding to the object to be detected, the active object recognition method further comprises:
    generating at least two button options associated with the object to be detected;
    if one of the button options is triggered, displaying media information contained in that button option.
  8. The active object recognition method of claim 1, characterized in that acquiring the position information and angle information of the object to be detected on the touch screen specifically comprises:
    dividing the detection region of the touch screen into three target recognition ranges, no two of which overlap;
    when a trigger event of the object to be detected is detected in any one of the target recognition ranges, acquiring the position information and angle information of the object to be detected within the corresponding target recognition range.
  9. The active object recognition method of claim 1, characterized in that, after a trigger event emitted by at least one object to be detected is detected, the active object recognition method further comprises:
    acquiring shape contour information of the object to be detected;
    matching the shape contour information in turn against multiple first contour entries pre-stored in an object recognition database;
    if the shape contour information successfully matches one of the first contour entries, outputting the shape type corresponding to that first contour entry.
  10. The active object recognition method of claim 1, characterized by further comprising:
    when no trigger event of at least one object to be detected is detected, detecting whether the object to be detected is in a hovering state;
    if the object to be detected is detected to be hovering, emitting position prompt information in a preset region of the touch screen, the preset region being intended to receive the object to be detected.
  11. The active object recognition method of claim 1, characterized in that acquiring the position information and angle information of the object to be detected on the touch screen to generate the virtual image information corresponding to the object to be detected specifically comprises:
    acquiring at least one trigger position of the object to be detected on the touch screen, and generating coordinates of the at least one trigger position in a second preset coordinate system, to obtain the position information;
    acquiring two trigger positions of the object to be detected on the touch screen, and obtaining the angle between the straight line formed by the two trigger positions and a preset reference line, to obtain the angle information;
    generating the virtual image information of the object to be detected from the position information and the angle information.
  12. The active object recognition method of claim 1, characterized in that generating the virtual image information corresponding to the object to be detected from the position information and the angle information specifically comprises:
    constructing a to-be-touched region from the position information and the angle information;
    acquiring a vertically projected planar image of the object to be detected within the to-be-touched region;
    extracting multiple image feature points of the object to be detected from the position information and the angle information;
    based on the image feature points, constructing the vertically projected planar image in three-dimensional space to obtain a three-dimensional virtual image of the object to be detected, so as to generate the virtual image information of the object to be detected.
  13. The active object recognition method of claim 1, characterized in that controlling the virtual image information of the target detection object according to the image processing signal specifically comprises:
    detecting whether the virtual image information of the target detection object has been locked in advance;
    if the virtual image information of the target detection object has been locked in advance, then, upon receiving an unlock instruction, controlling the virtual image information of the target detection object according to the image processing signal;
    if the virtual image information of the target detection object has not been locked in advance, controlling the virtual image information of the target detection object according to the image processing signal.
  14. A touch-screen-based active object recognition device, characterized by comprising:
    an identification module, configured to output, when a trigger event of at least one object to be detected is detected, at least one identification code in one-to-one correspondence with the at least one object to be detected;
    an image detection module, configured to acquire position information and angle information of the object to be detected on a touch screen, to generate virtual image information corresponding to the object to be detected;
    an image recognition module, configured to receive, when an image processing event is detected, an image processing signal and generate an image identification code matching the image processing signal;
    an image matching module, configured to sequentially match the image identification code against the at least one identification code; and
    an image control module, configured to control, when the image identification code successfully matches one of the identification codes, virtual image information of a target detection object according to the image processing signal;
    wherein the target detection object comprises the object to be detected corresponding to the successfully matched identification code.
  15. The active object recognition device of claim 14, characterized by further comprising:
    a wireless transmission module, configured to wirelessly transmit the image identification code to the image matching module.
  16. The active object recognition device of claim 15, characterized in that the wireless transmission module comprises an nRF24L01 wireless transceiver chip.
  17. The active object recognition device of claim 14, characterized by further comprising:
    a power module connected to the image detection module and the image recognition module, the power module being configured to supply power to the image detection module and the image recognition module.
  18. The active object recognition device of claim 14, characterized in that the identification module comprises three identification units, which correspond respectively to three detection areas of the touch screen, no two of the detection areas overlapping;
    each identification unit is configured to detect trigger events emitted by at least one object to be detected within its corresponding detection area, and to output at least one identification code in one-to-one correspondence with the at least one object to be detected.
  19. The active object recognition device of claim 14, characterized by further comprising:
    a display module connected to the image control module, the display module being configured to generate status prompt information when the virtual image information of the target detection object is controlled.
  20. An active object recognition system, characterized by comprising:
    a touch screen and the active object recognition device of claim 14, wherein the active object recognition device is connected to the touch screen; the active object recognition device is configured to control, when an image processing event and a trigger event of at least one object to be detected are detected, the virtual image information of the object to be detected that is to be controlled, according to an image processing signal.
PCT/CN2019/105349 2019-09-11 2019-09-11 Active object recognition method, object recognition device and object recognition system WO2021046747A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201980002036.4A 2019-09-11 2019-09-11 Active object recognition method, object recognition device and object recognition system
PCT/CN2019/105349 WO2021046747A1 (zh) 2019-09-11 2019-09-11 Active object recognition method, object recognition device and object recognition system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/105349 WO2021046747A1 (zh) 2019-09-11 2019-09-11 Active object recognition method, object recognition device and object recognition system

Publications (1)

Publication Number Publication Date
WO2021046747A1 true WO2021046747A1 (zh) 2021-03-18

Family

ID=69448532

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/105349 WO2021046747A1 (zh) Active object recognition method, object recognition device and object recognition system

Country Status (2)

Country Link
CN (1) CN110799987B (zh)
WO (1) WO2021046747A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114024426A (zh) * 2021-11-10 2022-02-08 Beihang University Novel information encoder for a linear motor and linear motor detection system

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114237483B (zh) * 2022-02-25 2022-05-17 Shenzhen Digital Vision Technology Co., Ltd. Intelligent table for recognizing touched objects and intelligent control method thereof

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4423302B2 (ja) * 2002-11-25 2010-03-03 Nippon Telegraph and Telephone Corp. Real-world object recognition method and real-world object recognition system
CN102760227A (zh) * 2012-03-06 2012-10-31 Lenovo (Beijing) Co., Ltd. Electronic device, object to be recognized and recognition method thereof
CN104036226A (zh) * 2013-03-04 2014-09-10 Lenovo (Beijing) Co., Ltd. Method for acquiring target object information and electronic device
CN104205124A (zh) * 2012-01-20 2014-12-10 Jin Duo System and method for identifying objects

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108519817A (zh) * 2018-03-26 2018-09-11 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Augmented-reality-based interaction method and apparatus, storage medium and electronic device



Also Published As

Publication number Publication date
CN110799987B (zh) 2023-05-02
CN110799987A (zh) 2020-02-14

Similar Documents

Publication Publication Date Title
US11699271B2 (en) Beacons for localization and content delivery to wearable devices
US20180232608A1 (en) Associating semantic identifiers with objects
CN104919823B (zh) 具有智能方向性会议的装置及系统
US9247303B2 (en) Display apparatus and user interface screen providing method thereof
US20150185825A1 (en) Assigning a virtual user interface to a physical object
EP3037917B1 (en) Monitoring
US10540543B2 (en) Human-computer-interaction through scene space monitoring
KR102251253B1 (ko) 가상 환경에서 제스처 기반 액세스 제어
US9632592B1 (en) Gesture recognition from depth and distortion analysis
US11107367B2 (en) Adaptive assembly guidance system
CN103631768A (zh) 协作数据编辑和处理系统
KR20060126727A (ko) 3차원 모션 기술을 이용하는 홈 엔터테인먼트용 개선된제어 장치
US9874977B1 (en) Gesture based virtual devices
US9477302B2 (en) System and method for programing devices within world space volumes
CN108351791A (zh) 具有用户输入配件的计算设备
WO2021046747A1 (zh) 主动式物体识别方法、物体识别装置以及物体识别系统
CN109313532A (zh) 信息处理设备、信息处理方法和程序
CN110168490A (zh) 显示装置及其控制方法
CN113805770B (zh) 一种光标的移动方法及电子设备
Wang et al. A gesture-based method for natural interaction in smart spaces
US20220405317A1 (en) Remote Control Device with Environment Mapping
Rehem Neto et al. Touch the air: an event-driven framework for interactive environments
CN204990352U (zh) 电子学习系统
US11269789B2 (en) Managing connections of input and output devices in a physical room
Ventes et al. A Programming Library for Creating Tangible User Interfaces

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19944923

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 29.08.2022)

122 Ep: pct application non-entry in european phase

Ref document number: 19944923

Country of ref document: EP

Kind code of ref document: A1