CN110969159A - Image recognition method and device and electronic equipment - Google Patents


Info

Publication number
CN110969159A
CN110969159A
Authority
CN
China
Prior art keywords
target
position information
acquisition device
image
image acquisition
Prior art date
Legal status
Granted
Application number
CN201911087426.4A
Other languages
Chinese (zh)
Other versions
CN110969159B (en)
Inventor
冯瑞丰
Current Assignee
Beijing ByteDance Network Technology Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd
Priority to CN201911087426.4A
Publication of CN110969159A
Application granted
Publication of CN110969159B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/24 Aligning, centring, orientation detection or correction of the image
    • G06V10/245 Aligning, centring, orientation detection or correction of the image by locating a pattern; Special marks for positioning

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiments of the present disclosure provide an image recognition method, an image recognition apparatus, and an electronic device, belonging to the field of image recognition. The method includes: acquiring first projection point position information and second projection point position information formed by a first image acquisition device and a second image acquisition device, respectively, for a target point; determining target position information of the target point based on the first projection point position information, the second projection point position information, the position information of the first image acquisition device, and the position information of the second image acquisition device; searching for a target image corresponding to the target position information in the images formed by the first image acquisition device and the second image acquisition device; and identifying target information corresponding to the target point in the target image. This processing scheme improves the accuracy of image recognition.

Description

Image recognition method and device and electronic equipment
Technical Field
The present disclosure relates to the field of image recognition technologies, and in particular, to an image recognition method and apparatus, and an electronic device.
Background
A point-reading pen enriches children's experience by letting them take part in a variety of targeted games and activities that continuously stimulate the senses of touch, sight, and hearing, increasing their interest and aiding brain development. The point-reading pen is small and very portable and can be used anytime and anywhere: it gives voice to otherwise dry text, enriches book content, makes reading and learning more interesting, and fully combines education with entertainment.
The point-reading pen can be called a skillful learning tool that breaks through traditional thinking: by reading wherever the user points, combined with a listen-speak-read learning method, it raises children's interest in learning, stimulates right-brain development, and helps them absorb textbook knowledge while learning happily, so that improving grades is no longer difficult. It is also small, easy to carry, and usable both in school and outside class.
Existing text point-reading schemes for educational products capture "where the finger points" with a monocular camera: image matching is performed with a finger-model algorithm to locate the fingertip, and the text or characters the finger points at are then found.
Disclosure of Invention
In view of the above, embodiments of the present disclosure provide an image recognition method, an image recognition apparatus and an electronic device to at least partially solve the problems in the prior art.
In a first aspect, an embodiment of the present disclosure provides an image recognition method, where the method includes:
respectively acquiring first projection point position information and second projection point position information formed by a first image acquisition device and a second image acquisition device aiming at a target point;
determining target position information of the target point based on the first projection point position information, the second projection point position information, the position information of the first image acquisition device and the position information of the second image acquisition device;
searching a target image corresponding to the target position information in images formed by the first image acquisition device and the second image acquisition device;
identifying target information corresponding to the target point in the target image.
According to a specific implementation manner of the embodiment of the present disclosure, the obtaining first projection point position information and second projection point position information formed by the first image capturing device and the second image capturing device for the target point respectively includes:
establishing a first space coordinate system by taking the center of the projection plane of the first image acquisition device as a coordinate origin;
establishing a second space coordinate system by taking the center of the screen projection plane of the second image acquisition device as a coordinate origin;
and determining the position information of the first projection point and the position information of the second projection point based on the first space coordinate system and the second space coordinate system.
According to a specific implementation manner of the embodiment of the present disclosure, the determining the position information of the first projection point and the position information of the second projection point based on the first spatial coordinate system and the second spatial coordinate system includes:
acquiring a first projection coordinate of the target point on a projection plane of the first image acquisition device;
acquiring a second projection coordinate of the target point on a projection plane of the second image acquisition device;
determining the first projection point position information and the second projection point position information based on the first projection coordinate and the second projection coordinate, respectively.
According to a specific implementation manner of the embodiment of the present disclosure, the determining the target position information of the target point based on the first projection point position information, the second projection point position information, the position information of the first image capturing device, and the position information of the second image capturing device includes:
respectively acquiring first camera position information and second camera position information contained in the first image acquisition device and the second image acquisition device;
determining a first included angle formed by the target point on the first image acquisition device based on the first camera position information, the first projection point position information and the position information of the first image acquisition device;
determining a second included angle formed by the target point on the second image acquisition device based on the second camera position information, the second projection point position information and the position information of the second image acquisition device;
and determining the position of the target point based on the first included angle and the second included angle.
According to a specific implementation manner of the embodiment of the present disclosure, the determining the position of the target point based on the first angle and the second angle includes:
determining a first distance from the target point to the first image acquisition device and a second distance from the target point to the second image acquisition device based on the first included angle and the second included angle;
determining a position of the target point based on the first distance and the second distance.
According to a specific implementation manner of the embodiment of the present disclosure, searching for a target image corresponding to the target position information in images formed by the first image capturing device and the second image capturing device includes:
searching images formed by the first image acquisition device and the second image acquisition device within a preset range by taking the target position information as a center;
and taking the searched images in the preset range formed by the first image acquisition device and the second image acquisition device as the target images.
According to a specific implementation manner of the embodiment of the present disclosure, the identifying, in the target image, target information corresponding to the target point includes:
performing image recognition on the content in the target image to form image recognition information;
and selecting the information closest to the target point from the image identification information to form the target information.
According to a specific implementation manner of the embodiment of the present disclosure, after the target information corresponding to the target point is identified in the target image, the method further includes:
and playing the target information in a voice mode.
In a second aspect, an embodiment of the present disclosure provides an image recognition apparatus, including:
the acquisition module is used for respectively acquiring first projection point position information and second projection point position information which are formed by the first image acquisition device and the second image acquisition device aiming at a target point;
a determining module, configured to determine target position information of the target point based on the first projection point position information, the second projection point position information, the position information of the first image capturing device, and the position information of the second image capturing device;
the searching module is used for searching a target image corresponding to the target position information in the images formed by the first image acquisition device and the second image acquisition device;
and the identification module is used for identifying target information corresponding to the target point in the target image.
In a third aspect, an embodiment of the present disclosure further provides an electronic device, where the electronic device includes:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the image recognition method of the first aspect or any implementation manner of the first aspect.
In a fourth aspect, the disclosed embodiments also provide a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the image recognition method of the first aspect or any implementation manner of the first aspect.
In a fifth aspect, the present disclosure also provides a computer program product, which includes a computer program stored on a non-transitory computer-readable storage medium, the computer program including program instructions, which, when executed by a computer, cause the computer to perform the image recognition method in the foregoing first aspect or any implementation manner of the first aspect.
The image recognition scheme in the embodiments of the disclosure includes: acquiring first projection point position information and second projection point position information formed by a first image acquisition device and a second image acquisition device, respectively, for a target point; determining target position information of the target point based on the first projection point position information, the second projection point position information, the position information of the first image acquisition device, and the position information of the second image acquisition device; searching for a target image corresponding to the target position information in the images formed by the two devices; and identifying target information corresponding to the target point in the target image. This processing scheme can effectively improve the accuracy of image recognition.
Drawings
To illustrate the technical solutions of the embodiments of the present disclosure more clearly, the drawings needed in the embodiments are briefly described below. The drawings in the following description show only some embodiments of the present disclosure; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of an image recognition method according to an embodiment of the present disclosure;
fig. 2 is a schematic diagram of an image recognition method according to an embodiment of the present disclosure;
FIG. 3 is a flow chart of another image recognition method provided by the embodiments of the present disclosure;
FIG. 4 is a flow chart of another image recognition method provided by the embodiments of the present disclosure;
fig. 5 is a schematic structural diagram of an image recognition apparatus according to an embodiment of the present disclosure;
fig. 6 is a schematic diagram of an electronic device provided in an embodiment of the present disclosure.
Detailed Description
The embodiments of the present disclosure are described in detail below with reference to the accompanying drawings.
The embodiments of the present disclosure are described below with specific examples, and other advantages and effects of the present disclosure will be readily apparent to those skilled in the art from this specification. The described embodiments are merely some, not all, of the embodiments of the disclosure. The disclosure may be embodied or carried out in various other specific forms, and various modifications and changes may be made in the details described herein without departing from the spirit of the disclosure. The features in the following embodiments and examples may be combined with one another provided they do not conflict. All other embodiments derived by a person skilled in the art from the embodiments disclosed herein without creative effort fall within the protection scope of the present disclosure.
It is noted that various aspects of the embodiments are described below within the scope of the appended claims. It should be apparent that the aspects described herein may be embodied in a wide variety of forms and that any specific structure and/or function described herein is merely illustrative. Based on the disclosure, one skilled in the art should appreciate that one aspect described herein may be implemented independently of any other aspects and that two or more of these aspects may be combined in various ways. For example, an apparatus may be implemented and/or a method practiced using any number of the aspects set forth herein. Additionally, such an apparatus may be implemented and/or such a method may be practiced using other structure and/or functionality in addition to one or more of the aspects set forth herein.
It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present disclosure, and the drawings only show the components related to the present disclosure rather than the number, shape and size of the components in actual implementation, and the type, amount and ratio of the components in actual implementation may be changed arbitrarily, and the layout of the components may be more complicated.
In addition, in the following description, specific details are provided to facilitate a thorough understanding of the examples. However, it will be understood by those skilled in the art that the aspects may be practiced without these specific details.
The embodiment of the disclosure provides an image identification method. The image recognition method provided by the embodiment may be executed by a computing device, which may be implemented as software or as a combination of software and hardware, and may be integrally provided in a server, a client, or the like.
Referring to fig. 1 and 2, an image recognition method in an embodiment of the present disclosure may include the following steps:
s101, respectively acquiring first projection point position information and second projection point position information formed by a first image acquisition device and a second image acquisition device aiming at a target point.
Existing text point-reading schemes for educational products capture "where the finger points" with a monocular camera, using a finger-model algorithm for image matching to locate the fingertip and find the text or characters the finger points at. Unlike such monocular schemes, the scheme of the present disclosure avoids the influence of ambient light and of the contrast between the finger and the book, positions the finger accurately, and thereby realizes the text point-reading function.
Specifically, a first image acquisition device and a second image acquisition device may be provided for image capture. The first image acquisition device may be one camera of a binocular imaging device, and the second image acquisition device may be the other camera. The target point may be the point indicated on a target object (e.g., a point-reading book or the screen of a point-reading device) by a user's pointing object (e.g., a finger or a point-reading pen). By collecting the positions of the projections that the target point forms on the first and second image acquisition devices, the first projection point position information and the second projection point position information can be obtained.
S102, determining target position information of the target point based on the first projection point position information, the second projection point position information, the position information of the first image acquisition device and the position information of the second image acquisition device.
After the position information of the first projection point, the position information of the second projection point, the position information of the first image acquisition device and the position information of the second image acquisition device are obtained, the target point can be accurately positioned in space based on the information, so that the position information of the target point relative to the first image acquisition device and the second image acquisition device is obtained.
Specifically, referring to FIG. 2, O1 and O2 are the positions of the two cameras (the first image acquisition device and the second image acquisition device), and the mounting angles of the cameras are known. P1 and P2 are the projections of the target point P on the respective camera imaging planes, i.e., the points where P is captured in each photograph. From the mounting positions of the first and second image acquisition devices and the positions of the points P1 and P2, the three-dimensional position of P in space can readily be calculated, and the target position information of the target point thereby determined.
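The disclosure does not spell out the triangulation formulas. Under the common simplifying assumption of two identical, rectified cameras with parallel optical axes (a special case of the known mounting angles above), the position of P follows from the stereo disparity between P1 and P2. The function name, the pinhole model, and the parameter conventions below are illustrative assumptions, not part of the disclosure:

```python
import numpy as np

def triangulate(p1, p2, focal_px, baseline_m):
    """Estimate the 3-D position of target point P from its two projections.

    p1, p2     -- (x, y) coordinates of P1 and P2, measured in pixels from
                  each image centre (rectified stereo pair assumed)
    focal_px   -- focal length in pixels, assumed known from calibration
    baseline_m -- distance between the two camera centres O1 and O2
    """
    disparity = p1[0] - p2[0]
    if disparity == 0:
        raise ValueError("zero disparity: point at infinity")
    z = focal_px * baseline_m / disparity   # depth from the baseline
    x = p1[0] * z / focal_px                # lateral offset
    y = p1[1] * z / focal_px                # vertical offset
    return np.array([x, y, z])
```

For cameras mounted at a nonzero angle, the rays through P1 and P2 would instead be intersected in a common world frame, e.g. with a least-squares midpoint method.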
S103, searching a target image corresponding to the target position information in the images formed by the first image acquisition device and the second image acquisition device.
In addition to the image of the target point, the first and second image acquisition devices simultaneously capture other objects in their field of view. For example, when a user performs a point-reading operation with a finger or a point-reading pen on a point-reading book or on the screen of a point-reading machine, the two devices capture, besides the target point, part or all of the point-reading content on the book or screen, and that content can then be searched.
Therefore, the images formed by the first and second image acquisition devices can be searched within a preset range centred on the target position information formed by the target point, and the image within that range is taken as the target image corresponding to the target position information. The target image may include all of the point-reading content in the current view, or only part of it.
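As a sketch of this step, a square "preset range" centred on the projected target position can be cropped out of a captured frame. The window half-size is an assumed tunable parameter, not a value given by the disclosure:

```python
import numpy as np

def crop_target_region(image, target_xy, half_size=64):
    """Crop a square window centred on the target's projected pixel
    position, clamped to the frame borders (the 'preset range')."""
    h, w = image.shape[:2]
    cx, cy = target_xy
    x0, y0 = max(0, cx - half_size), max(0, cy - half_size)
    x1, y1 = min(w, cx + half_size), min(h, cy + half_size)
    return image[y0:y1, x0:x1]
```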
And S104, identifying target information corresponding to the target point in the target image.
Target information corresponding to the target point can be obtained by recognizing the content in the target image; depending on the content the target point indicates, the target information may be text, pictures, or a combination of these.
In the above embodiment, the first image capturing device and the second image capturing device are used together to determine the position of the target, which improves the accuracy of identifying the target point.
Referring to fig. 3, according to a specific implementation manner of the embodiment of the present disclosure, the obtaining first projection point position information and second projection point position information formed by the first image capturing device and the second image capturing device for the target point respectively includes:
s301, establishing a first space coordinate system by taking the center of the projection plane of the first image acquisition device as a coordinate origin.
The first spatial coordinate system may comprise coordinates in the three spatial directions x, y, and z: the projection plane of the first image acquisition device may be taken as the x-y plane, and the direction perpendicular to that plane as the z direction.
And S302, establishing a second space coordinate system by taking the center of the screen projection plane of the second image acquisition device as a coordinate origin.
The second spatial coordinate system may likewise comprise coordinates in the three spatial directions x, y, and z: the projection plane of the second image acquisition device may be taken as the x-y plane, and the direction perpendicular to that plane as the z direction.
S303, determining the position information of the first projection point and the position information of the second projection point based on the first spatial coordinate system and the second spatial coordinate system.
Specifically, a first projection coordinate of the target point on a projection plane of the first image acquisition device may be obtained; acquiring a second projection coordinate of the target point on a projection plane of the second image acquisition device; determining the first projection point position information and the second projection point position information based on the first projection coordinate and the second projection coordinate, respectively.
With the above-described embodiment, the spatial position of the target point can be determined based on the set first spatial coordinate system and second spatial coordinate system.
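A minimal sketch of the coordinate convention described above: raw pixel indices (origin at the top-left corner of the frame) are re-expressed relative to the centre of the projection plane, as required by the first and second spatial coordinate systems. The y-up axis direction is an assumption for illustration:

```python
def to_centred_coords(px, py, width, height):
    """Convert raw pixel indices into projection-plane coordinates whose
    origin is the plane centre, x to the right and y upward."""
    x = px - width / 2.0
    y = height / 2.0 - py   # image rows grow downward, so flip the sign
    return (x, y)
```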
Referring to fig. 4, according to a specific implementation manner of the embodiment of the present disclosure, the determining the target position information of the target point based on the first projection point position information, the second projection point position information, the position information of the first image capturing device, and the position information of the second image capturing device includes:
s401, respectively acquiring first camera position information and second camera position information contained in the first image acquisition device and the second image acquisition device;
s402, determining a first included angle formed by the target point on the first image acquisition device based on the first camera position information, the first projection point position information and the position information of the first image acquisition device;
s403, determining a second included angle formed by the target point on the second image acquisition device based on the second camera position information, the second projection point position information and the position information of the second image acquisition device;
s404, determining the position of the target point based on the first included angle and the second included angle.
Specifically, a first distance from the target point to the first image capturing device and a second distance from the target point to the second image capturing device may be determined based on the first angle and the second angle, and the position of the target point may be determined based on the first distance and the second distance.
Through the above steps, the position of the target point can be accurately calculated.
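The included-angle construction of S402 to S404 can be sketched with the law of sines in the triangle formed by O1, O2, and the target point P. The angle convention below (angles measured between the baseline O1-O2 and the rays toward P) is an assumption for illustration:

```python
import math

def locate_from_angles(baseline, alpha, beta):
    """Recover the distances from each camera to the target point P.

    baseline    -- length of the segment O1-O2
    alpha, beta -- angles (radians) between the baseline and the rays
                   O1->P and O2->P respectively
    """
    gamma = math.pi - alpha - beta                 # angle at P
    d1 = baseline * math.sin(beta) / math.sin(gamma)   # |O1 P|
    d2 = baseline * math.sin(alpha) / math.sin(gamma)  # |O2 P|
    return d1, d2
```

Given d1 and d2, the position of P is the intersection of circles of those radii about O1 and O2, on the side of the baseline facing the scene.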
According to a specific implementation manner of the embodiment of the present disclosure, searching for a target image corresponding to the target position information in images formed by the first image capturing device and the second image capturing device includes: searching images formed by the first image acquisition device and the second image acquisition device within a preset range by taking the target position information as a center; and taking the searched images in the preset range formed by the first image acquisition device and the second image acquisition device as the target images.
According to a specific implementation manner of the embodiment of the present disclosure, the identifying, in the target image, target information corresponding to the target point includes: performing image recognition on the content in the target image to form image recognition information; and selecting the information closest to the target point from the image identification information to form the target information.
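A hedged sketch of the "closest information" selection: given recognition results, each paired with the centre coordinate of its recognised region (this result format is an assumption, not specified by the disclosure), pick the one nearest to the target point:

```python
import math

def nearest_recognition(target_xy, detections):
    """Select the recognised item closest to the target point.

    detections -- list of (text, (x, y)) pairs, where (x, y) is the
                  centre of each recognised region in image coordinates
    """
    def dist(det):
        x, y = det[1]
        return math.hypot(x - target_xy[0], y - target_xy[1])
    return min(detections, key=dist)[0]
```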
According to a specific implementation manner of the embodiment of the present disclosure, after the target information corresponding to the target point is identified in the target image, the method further includes: and playing the target information in a voice mode.
Corresponding to the above method embodiment, referring to fig. 5, the disclosed embodiment further provides an image recognition apparatus 50, including:
an obtaining module 501, configured to obtain first projection point position information and second projection point position information formed by a first image collecting device and a second image collecting device for a target point, respectively;
a determining module 502, configured to determine target position information of the target point based on the first projection point position information, the second projection point position information, the position information of the first image capturing device, and the position information of the second image capturing device;
a searching module 503, configured to search for a target image corresponding to the target position information in images formed by the first image capturing device and the second image capturing device;
an identifying module 504, configured to identify target information corresponding to the target point in the target image.
For parts not described in detail in this embodiment, reference is made to the contents described in the above method embodiments, which are not described again here.
Referring to fig. 6, an embodiment of the present disclosure also provides an electronic device 60, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the image recognition method of the preceding method embodiment.
The disclosed embodiments also provide a non-transitory computer-readable storage medium storing computer instructions for causing the computer to perform the image recognition method in the aforementioned method embodiments.
The disclosed embodiments also provide a computer program product comprising a computer program stored on a non-transitory computer readable storage medium, the computer program comprising program instructions which, when executed by a computer, cause the computer to perform the image recognition method in the aforementioned method embodiments.
Referring now to FIG. 6, a schematic diagram of an electronic device 60 suitable for use in implementing embodiments of the present disclosure is shown. The electronic devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., car navigation terminals), and the like, and fixed terminals such as digital TVs, desktop computers, and the like. The electronic device shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 6, the electronic device 60 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 601 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 602 or a program loaded from a storage means 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the electronic device 60 are also stored. The processing device 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
Generally, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touch pad, keyboard, mouse, image sensor, microphone, accelerometer, gyroscope, etc.; output devices 607 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 608 including, for example, tape, hard disk, etc.; and a communication device 609. The communication means 609 may allow the electronic device 60 to communicate with other devices wirelessly or by wire to exchange data. While the figures illustrate an electronic device 60 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 609, or may be installed from the storage means 608, or may be installed from the ROM 602. The computer program, when executed by the processing device 601, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquiring at least two internet protocol addresses; sending a node evaluation request comprising the at least two internet protocol addresses to node evaluation equipment, wherein the node evaluation equipment selects the internet protocol addresses from the at least two internet protocol addresses and returns the internet protocol addresses; receiving an internet protocol address returned by the node evaluation equipment; wherein the obtained internet protocol address indicates an edge node in the content distribution network.
Alternatively, the computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: receiving a node evaluation request comprising at least two internet protocol addresses; selecting an internet protocol address from the at least two internet protocol addresses; returning the selected internet protocol address; wherein the received internet protocol address indicates an edge node in the content distribution network.
Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the remote-computer case, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. The name of a unit does not, in some cases, constitute a limitation of the unit itself; for example, the first retrieving unit may also be described as a "unit for retrieving at least two internet protocol addresses".
It should be understood that portions of the present disclosure may be implemented in hardware, software, firmware, or a combination thereof.
The above description covers only specific embodiments of the present disclosure, but the protection scope of the present disclosure is not limited thereto; any change or substitution that can readily be conceived by those skilled in the art within the technical scope of the present disclosure shall be covered by the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (11)

1. An image recognition method, comprising:
respectively acquiring first projection point position information and second projection point position information formed by a first image acquisition device and a second image acquisition device aiming at a target point;
determining target position information of the target point based on the first projection point position information, the second projection point position information, the position information of the first image acquisition device and the position information of the second image acquisition device;
searching a target image corresponding to the target position information in images formed by the first image acquisition device and the second image acquisition device;
identifying target information corresponding to the target point in the target image.
2. The method according to claim 1, wherein the acquiring of the first projection point position information and the second projection point position information formed by the first image acquisition device and the second image acquisition device for the target point respectively comprises:
establishing a first space coordinate system by taking the center of the projection plane of the first image acquisition device as a coordinate origin;
establishing a second space coordinate system by taking the center of the projection plane of the second image acquisition device as a coordinate origin;
and determining the position information of the first projection point and the position information of the second projection point based on the first space coordinate system and the second space coordinate system.
3. The method of claim 2, wherein determining the first and second projected point location information based on the first and second spatial coordinate systems comprises:
acquiring a first projection coordinate of the target point on a projection plane of the first image acquisition device;
acquiring a second projection coordinate of the target point on a projection plane of the second image acquisition device;
determining the first projection point position information and the second projection point position information based on the first projection coordinate and the second projection coordinate, respectively.
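For illustration only (this sketch is not part of the claim language), the step in claims 2-3 of expressing each projection point in a coordinate system whose origin is the center of the device's projection plane can be sketched as follows; the function name, the image dimensions, and the y-up convention are assumptions of this example, not of the patent:

```python
def centered_projection_coords(px, py, width, height):
    """Re-express a pixel position (px, py) relative to the center of a
    projection plane of size width x height, with the y axis pointing up.

    This mirrors the claimed step of building a spatial coordinate system
    whose origin is the center of each device's projection plane.
    """
    return px - width / 2.0, height / 2.0 - py

# The center pixel of a 640x480 projection plane maps to the origin:
print(centered_projection_coords(320, 240, 640, 480))  # (0.0, 0.0)
```

Under this convention, the same physical target point yields one such coordinate pair per acquisition device, giving the claimed first and second projection point position information.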
4. The method of claim 1, wherein determining the target location information of the target point based on the first projection point location information, the second projection point location information, the location information of the first image acquisition device, and the location information of the second image acquisition device comprises:
respectively acquiring first camera position information and second camera position information contained in the first image acquisition device and the second image acquisition device;
determining a first included angle formed by the target point on the first image acquisition device based on the first camera position information, the first projection point position information and the position information of the first image acquisition device;
determining a second included angle formed by the target point on the second image acquisition device based on the second camera position information, the second projection point position information and the position information of the second image acquisition device;
and determining the position of the target point based on the first included angle and the second included angle.
5. The method of claim 4, wherein determining the location of the target point based on the first angle and the second angle comprises:
determining a first distance from the target point to the first image acquisition device and a second distance from the target point to the second image acquisition device based on the first included angle and the second included angle;
determining a position of the target point based on the first distance and the second distance.
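For illustration only (not part of the claim language), the angle-based computation of claims 4-5 amounts to triangulation from two cameras on a common baseline: each included angle defines a ray, the rays intersect at the target point, and the two distances follow. The coordinate convention, function name, and the specific angle definition below are assumptions of this sketch:

```python
import math

def triangulate(baseline, alpha, beta):
    """Locate a target point from its two viewing angles.

    Camera 1 sits at the origin and camera 2 at (baseline, 0); alpha and
    beta are the angles (in radians) that each camera's ray to the target
    makes with the baseline.  Intersecting the two rays gives the target
    position, and the first and second distances follow directly.
    """
    ta, tb = math.tan(alpha), math.tan(beta)
    y = baseline * ta * tb / (ta + tb)  # height of the ray intersection
    x = y / ta                          # offset along the baseline
    d1 = math.hypot(x, y)               # first distance (camera 1 to target)
    d2 = math.hypot(baseline - x, y)    # second distance (camera 2 to target)
    return (x, y), d1, d2

# A target at (0.4, 0.8) seen from two cameras 1 m apart:
alpha = math.atan2(0.8, 0.4)  # first included angle
beta = math.atan2(0.8, 0.6)   # second included angle
(pos, d1, d2) = triangulate(1.0, alpha, beta)
```

The design choice here is the classic one for binocular vision: two angles plus a known baseline over-determine nothing and under-determine nothing, so the planar position is recovered exactly (up to measurement noise in the angles).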
6. The method according to claim 1, wherein the searching for the target image corresponding to the target position information in the images formed by the first image acquisition device and the second image acquisition device comprises:
searching, within a preset range centered on the target position information, the images formed by the first image acquisition device and the second image acquisition device;
and taking the images found within the preset range formed by the first image acquisition device and the second image acquisition device as the target image.
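For illustration only (not part of the claim language), the preset-range search of claim 6 can be sketched as clamping a window around the projected target position; the function name, the square window shape, and the clamping behavior at image borders are assumptions of this example:

```python
def search_window(center, radius, image_size):
    """Return a rectangular search region of half-width `radius` around
    the projected target position, clamped to the image bounds.

    This corresponds to searching the formed images within a preset
    range centered on the target position information.
    """
    cx, cy = center
    w, h = image_size
    left, top = max(0, cx - radius), max(0, cy - radius)
    right, bottom = min(w, cx + radius), min(h, cy + radius)
    return left, top, right, bottom

# A window near the image corner is clipped to stay inside the image:
print(search_window((10, 10), 50, (640, 480)))  # (0, 0, 60, 60)
```

The cropped region from each device's image would then serve as the target image passed to the recognition step.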
7. The method of claim 1, wherein identifying target information corresponding to the target point in the target image comprises:
performing image recognition on the content in the target image to form image recognition information;
and selecting the information closest to the target point from the image identification information to form the target information.
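For illustration only (not part of the claim language), the selection step of claim 7 can be sketched as a nearest-neighbor choice among recognized items; the data layout (text paired with a position) and the Euclidean distance metric are assumptions of this example:

```python
import math

def pick_closest(recognitions, target):
    """From recognized items [(text, (x, y)), ...], return the text whose
    position lies nearest the target point, i.e. the information closest
    to the target point among the image identification information."""
    return min(recognitions, key=lambda r: math.dist(r[1], target))[0]

items = [("cat", (10.0, 10.0)), ("dog", (3.0, 4.0)), ("sun", (40.0, 1.0))]
print(pick_closest(items, (0.0, 0.0)))  # dog
```

In a point-reading scenario, `items` would be the words or characters returned by image recognition of the target image, and `target` the triangulated position of the pointing tip.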
8. The method of claim 1, wherein after identifying the target information corresponding to the target point in the target image, the method further comprises:
and playing the target information in a voice mode.
9. An image recognition apparatus, comprising:
the acquisition module is used for respectively acquiring first projection point position information and second projection point position information which are formed by the first image acquisition device and the second image acquisition device aiming at a target point;
a determining module, configured to determine target position information of the target point based on the first projection point position information, the second projection point position information, the position information of the first image capturing device, and the position information of the second image capturing device;
the searching module is used for searching a target image corresponding to the target position information in the images formed by the first image acquisition device and the second image acquisition device;
and the identification module is used for identifying target information corresponding to the target point in the target image.
10. An electronic device, characterized in that the electronic device comprises:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the image recognition method of any one of the preceding claims 1-8.
11. A non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the image recognition method of any one of the preceding claims 1-8.
CN201911087426.4A 2019-11-08 2019-11-08 Image recognition method and device and electronic equipment Active CN110969159B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911087426.4A CN110969159B (en) 2019-11-08 2019-11-08 Image recognition method and device and electronic equipment


Publications (2)

Publication Number Publication Date
CN110969159A true CN110969159A (en) 2020-04-07
CN110969159B CN110969159B (en) 2023-08-08

Family

ID=70030570

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911087426.4A Active CN110969159B (en) 2019-11-08 2019-11-08 Image recognition method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN110969159B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111781585A * 2020-06-09 2020-10-16 Zhejiang Dahua Technology Co., Ltd. Method for determining firework setting-off position and image acquisition equipment
CN111781585B (en) * 2020-06-09 2023-07-18 Zhejiang Dahua Technology Co., Ltd. Method for determining firework setting-off position and image acquisition equipment
CN111753715A * 2020-06-23 2020-10-09 Guangdong Genius Technology Co., Ltd. Method and device for shooting test questions in click-to-read scene, electronic equipment and storage medium
CN111753715B (en) * 2020-06-23 2024-06-21 Guangdong Genius Technology Co., Ltd. Method and device for shooting test questions in click-to-read scene, electronic equipment and storage medium
CN112489120A * 2021-02-04 2021-03-12 Zhongke Changguang Jingtuo Intelligent Equipment (Suzhou) Co., Ltd. Image recognition method and system for multi-angle image

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2163847A1 * 2007-06-15 2010-03-17 Kabushiki Kaisha Toshiba Instrument for examining/measuring object to be measured
CN105373266A * 2015-11-05 2016-03-02 Shanghai Yinghuo Intelligent Technology Co., Ltd. Novel binocular vision based interaction method and electronic whiteboard system
CN107545260A * 2017-09-25 2018-01-05 Shanghai Dianji University Talking pen character identification system based on binocular vision
CN109598755A * 2018-11-13 2019-04-09 Institute of Computing Technology, Chinese Academy of Sciences Hazardous chemical leakage detection method based on binocular vision
CN109753554A * 2019-01-14 2019-05-14 Guangdong Genius Technology Co., Ltd. Searching method and tutoring device based on three-dimensional positioning
CN110096993A * 2019-04-28 2019-08-06 DeepBlue Technology (Shanghai) Co., Ltd. Object detection apparatus and method for binocular stereo vision
JP6573419B1 * 2018-09-26 2019-09-11 Shenzhen Ubtech Robotics Corp Ltd Positioning method, robot and computer storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
RUTVI PRAJAPATI: "Design and Testing Algorithm for Real Time Text Images: Rehabilitation Aid for Blind", International Journal of Science Technology & Engineering, pages 275-278 *
XIONG Bangshu: "JSEG-based coordinate positioning method for point-reading machines", Semiconductor Optoelectronics, vol. 35, no. 6, pages 1101-1105 *
LUO Qingsheng et al., Beijing Institute of Technology Press *


Similar Documents

Publication Publication Date Title
US11721073B2 (en) Synchronized, interactive augmented reality displays for multifunction devices
WO2021004247A1 (en) Method and apparatus for generating video cover and electronic device
CN109766879B (en) Character detection model generation method, character detection device, character detection equipment and medium
CN109600559B (en) Video special effect adding method and device, terminal equipment and storage medium
CN104199906A (en) Recommending method and device for shooting region
CN110930220A (en) Display method, display device, terminal equipment and medium
CN110969159B (en) Image recognition method and device and electronic equipment
CN111246196B (en) Video processing method and device, electronic equipment and computer readable storage medium
CN112232311A (en) Face tracking method and device and electronic equipment
CN109829431B (en) Method and apparatus for generating information
CN113784046A (en) Follow-up shooting method, device, medium and electronic equipment
CN111862349A (en) Virtual brush implementation method and device and computer readable storage medium
CN112487871A (en) Handwriting data processing method and device and electronic equipment
CN112231023A (en) Information display method, device, equipment and storage medium
CN113784045B (en) Focusing interaction method, device, medium and electronic equipment
CN111462548A (en) Paragraph point reading method, device, equipment and readable medium
CN114332224A (en) Method, device and equipment for generating 3D target detection sample and storage medium
CN116527993A (en) Video processing method, apparatus, electronic device, storage medium and program product
CN112991147B (en) Image processing method, device, electronic equipment and computer readable storage medium
CN112306223B (en) Information interaction method, device, equipment and medium
CN112346630B (en) State determination method, device, equipment and computer readable medium
CN112492381B (en) Information display method and device and electronic equipment
CN110390291B (en) Data processing method and device and electronic equipment
CN111429544A (en) Vehicle body color processing method and device and electronic equipment
CN117784920A (en) Data processing method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant