CN113709364A - Object identifying camera equipment and object identifying method - Google Patents


Info

Publication number
CN113709364A
Authority
CN
China
Prior art keywords
voice
module
image
control module
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110959641.XA
Other languages
Chinese (zh)
Other versions
CN113709364B (en)
Inventor
黄一清
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Unisound Shanghai Intelligent Technology Co Ltd
Original Assignee
Unisound Shanghai Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Unisound Shanghai Intelligent Technology Co Ltd filed Critical Unisound Shanghai Intelligent Technology Co Ltd
Priority to CN202110959641.XA priority Critical patent/CN113709364B/en
Publication of CN113709364A publication Critical patent/CN113709364A/en
Application granted granted Critical
Publication of CN113709364B publication Critical patent/CN113709364B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/62Control of parameters via user interfaces
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/30Semantic analysis
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09BEDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00Electrically-operated educational appliances
    • G09B5/06Electrically-operated educational appliances with both visual and audible presentation of the material to be studied
    • G09B5/065Combinations of audio and video presentations, e.g. videotapes, videodiscs, television systems
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L13/00Speech synthesis; Text to speech systems
    • G10L13/02Methods for producing synthetic speech; Speech synthesisers
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/26Speech to text systems
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • H04N21/23424Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving splicing one content stream with another content stream, e.g. for inserting or substituting an advertisement

Abstract

The invention discloses an object-identifying camera device and an object-identifying method. The device comprises: a touch screen camera, which includes a camera body, a microphone, a voice player, and a controller, the controller, microphone, and voice player each being mounted on the camera body; and a server, which includes a control module connected to the controller by wireless signals, together with a storage module, an image recognition module, a matching module, an intention analyzing module, and a speech synthesis module, the image recognition module, matching module, and storage module each being connected to the control module. The invention solves the problem that children cannot obtain timely and accurate answers while learning to recognize new things.

Description

Object identifying camera equipment and object identifying method
Technical Field
The invention relates to the technical field of smart appliances, and in particular to an object-identifying camera device and an object-identifying method.
Background
While children are learning to recognize new things, their parents cannot accompany them at all times, or may not themselves understand the new things, and so cannot answer the children's questions promptly and accurately. As more and more questions go unanswered, and are even forgotten over time, the pace at which children learn to recognize things cannot be effectively improved.
Disclosure of Invention
In order to overcome the defects of the prior art, an object-identifying camera device and an object-identifying method are provided, so as to solve the problem that children cannot obtain timely and accurate answers while learning to recognize new objects.
In order to achieve the above object, there is provided an object recognition camera apparatus including:
the touch screen camera comprises a camera body, a microphone, a voice player and a controller, wherein the camera body is used for acquiring images of new things and playing images and videos; and
the server comprises a control module, a storage module, an image recognition module, a matching module, an intention analysis module and a voice synthesis module, wherein the control module is connected with the controller through wireless signals, the storage module is used for storing an image library and a plurality of information libraries, the image recognition module is used for recognizing the intention objects marked in the images of the new objects and generating intention object images, the matching module is used for matching the intention object images to the images of the objects in the image library to determine the information libraries of the intention objects, the image recognition module, the matching module, the intention analysis module and the storage module are respectively connected with the control module, the image library comprises a plurality of images of the objects, each information library comprises introduction information of the objects, and the images of the objects are correspondingly connected with one information library.
Further, the server further comprises a video synthesis module, and the video synthesis module is connected to the control module.
Further, the server is a cloud server.
Further, the intention analyzing module comprises a voice recognition unit and a semantic understanding unit, wherein the voice recognition unit is connected to the control module, and the semantic understanding unit is connected to the voice recognition unit and the control module.
The invention provides an object recognizing method of object recognizing camera equipment, which comprises the following steps:
the camera body collects new object images;
the microphone collects inquiry voice;
marking an intended object in the new object image through the touch screen of the camera body;
the controller simultaneously transmits the new object image marked with the intention object and the inquiry voice to the outside;
constructing an image library and a plurality of information libraries in a storage module, wherein the image library comprises images of a plurality of objects, each information library comprises introduction information of the objects, and the image of each object is correspondingly connected with one information library;
the control module receives the new object image and the inquiry voice at the same time;
the image identification module identifies the intention object marked in the new object image and generates an intention object image;
the matching module matches the image of the intended object to the image of the thing in the image library to determine an information library of the intended object;
the intention analysis module analyzes the query voice to obtain the semantic meaning of the query voice;
in the determined information base of the intention object, the control module extracts answer information related to the semantics;
the voice synthesis module synthesizes the answer information into answer voice;
the control module sends the answer voice to the outside;
the controller receiving the answer speech;
the voice player outputs the answer voice.
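The query flow above can be sketched as a single pipeline. Everything below is an illustrative assumption: the patent does not prescribe implementations for recognition, matching, parsing, or synthesis, so toy stand-in functions are passed in to exercise the flow.

```python
# Hedged sketch of the query flow (camera -> server -> answer voice).
# All module implementations are hypothetical stand-ins.
def answer_query(marked_image, query_audio, storage, recognize_image,
                 match_image, parse_intent, synthesize_speech):
    intended = recognize_image(marked_image)       # image recognition module
    info_library = match_image(intended, storage)  # matching module -> information library
    semantics = parse_intent(query_audio)          # intention analyzing module
    answer_text = info_library.get(semantics, "")  # control module extracts answer info
    return synthesize_speech(answer_text)          # speech synthesis module

# Toy stand-ins to exercise the flow:
storage = {"watermelon": {"variety": "Seedless and seeded varieties exist."}}
audio = answer_query(
    marked_image="photo_with_circle",
    query_audio="what varieties of watermelon are there",
    storage=storage,
    recognize_image=lambda img: "watermelon",
    match_image=lambda name, s: s[name],
    parse_intent=lambda a: "variety",
    synthesize_speech=lambda text: ("AUDIO", text),
)
```

Each lambda here corresponds to one module in the claimed device; swapping in real recognizers and synthesizers would not change the control flow.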
Further, the method also comprises the following steps:
after receiving the new object image and the query voice at the same time, storing the new object image in a classification mode in the storage module;
after receiving the new thing image and the query voice simultaneously a plurality of times, the microphone collects a review voice;
the controller sends the review voice to the outside;
the control module receives the review voice;
the intention analysis module analyzes the review voice to obtain the semantics of the review voice;
in the storage module, the control module extracts the classified and stored new object images matched with the semantics of the review voice based on the semantics of the review voice and sends the new object images to the outside;
the controller receives the new object image sent by the control module;
the camera body plays the new object image sent by the control module.
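The review retrieval above — matching the review voice's semantics against classified stored images — might look like the following sketch. The category-keyword match is an assumption, since the patent does not specify how semantics are matched to stored classes.

```python
# Hedged sketch of review retrieval: return stored images whose
# classification category appears in the review-voice semantics.
def review_images(review_semantics, classified_images):
    """classified_images maps category -> list of image names."""
    matched = []
    for category, images in classified_images.items():
        if category in review_semantics:
            matched.extend(images)
    return matched

hits = review_images(
    "show me the plants I have seen",
    {"plant": ["chrysanthemum.jpg"], "animal": ["sparrow.jpg"]},
)
```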
Further, the method also comprises the following steps:
after receiving the new object image and the query voice simultaneously a plurality of times, the microphone collects a recall voice;
the controller sends the recall voice to the outside;
the control module receives the recall voice;
the intention analysis module analyzes the recall voice to obtain the semantics of the recall voice;
in the storage module, the control module extracts the classified and stored new object images matched with the semantics of the recall voice;
the video synthesis module synthesizes the extracted new thing images into video information;
the control module sends the video information to the outside;
the controller receives the video information;
the camera body plays the video information.
Further, the step of the intention parsing module parsing the query voice to obtain the semantics of the query voice comprises:
a voice recognition unit recognizes the query voice to obtain a text of the query voice;
a semantic understanding unit understands text of the query speech to obtain semantics of the query speech.
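A minimal sketch of this two-stage parse, with stand-ins for both units; the keyword rule and the audio dictionary format are assumptions, not the patent's method.

```python
# Stage 1: speech recognition unit -- audio to text.
def speech_to_text(audio):
    # Stand-in ASR: a real system would call a speech recognizer here.
    return audio["transcript"]

# Stage 2: semantic understanding unit -- text to a semantic label.
def understand(text):
    # Stand-in NLU: map query text to a semantic slot by keyword.
    for slot in ("variety", "origin", "introduction"):
        if slot in text:
            return slot
    return "introduction"  # default semantic when no keyword matches

def parse_query(audio):
    return understand(speech_to_text(audio))

semantic = parse_query({"transcript": "tell me the variety of watermelon"})
```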
The advantage of the invention is that its object-identifying camera device gives an ordinary touch-screen camera the ability to identify objects circled in a picture, based on artificial-intelligence capabilities such as image recognition, speech recognition, semantic understanding, and speech synthesis, helping children and other users get to know and learn about new things more quickly and efficiently.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
fig. 1 is a schematic block diagram of an object recognition camera device according to an embodiment of the present invention.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the present invention are shown in the drawings.
It should be noted that the embodiments in the present application, and the features of those embodiments, may be combined with each other when there is no conflict. The present application will be described in detail below with reference to the embodiments and the accompanying drawings.
Referring to fig. 1, the present invention provides an object recognition camera device, including: a touch screen camera 1 and a server 2.
The touch screen camera 1 is intended to be carried around by a user. In this embodiment, the user includes, but is not limited to, a child. The user sends instructions to the server through the touch screen camera, and the server provides powerful storage and computing services to the user based on those instructions.
Specifically, the touch screen camera 1 includes a camera body 11, a microphone 12, a voice player 13, and a controller 14.
The camera body 11 has a touch screen, through which the user inputs various instructions to control the camera body to take pictures and process them. The camera body is used to collect images of new things and to play images and videos. It can shoot photos and videos and store them on a memory card inside the camera body; images and videos stored on the memory card can be viewed with a review instruction.
The microphone 12, the voice player 13, and the controller 14 are integrally mounted inside the camera body 11. The camera body 11, the microphone 12, and the voice player 13 are connected to a controller 14, respectively.
In this embodiment, the user inputs instructions to the controller through the touch screen, reviews the images stored in the camera body, and marks them with the image processing software. Specifically, the user may use the on-screen brush to circle an object to be queried on a captured image (i.e., the intended object, which the user wants to learn about), and the controller sends the marked image to the server for the corresponding processing.
Specifically, the server 2 includes a control module 22, a storage module 21, an image recognition module 23 and a matching module 24, an intention analyzing module 25, and a speech synthesis module 26.
The storage module 21 is constructed in advance and stores an image library and a plurality of information libraries. The image library includes images of a plurality of things. Each information library includes introduction information (text information) about one thing. The image of each thing is correspondingly connected to one information library; that is, the image and the information library for the thing in that image are bound to each other in a mapping relation. Once an image in the image library has been identified, the name of the thing in that image and the related introduction information for it can be obtained. For example, if an image in the image library shows a chrysanthemum, its corresponding information library is the chrysanthemum library, whose information includes, but is not limited to: an introduction to the chrysanthemum, the origin of the chrysanthemum, ancient poetry related to the chrysanthemum, classical allusions related to the chrysanthemum, and so on.
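The image-library / information-library mapping described above can be sketched with a minimal data structure. All names here are illustrative assumptions; the patent does not specify a concrete storage layout.

```python
# Minimal sketch of the storage module's image-to-information mapping.
class StorageModule:
    def __init__(self):
        # image library: maps an image identifier to the name of the thing it depicts
        self.image_library = {}
        # information libraries: one per thing, holding introduction text entries
        self.information_libraries = {}

    def register(self, image_id, thing_name, introduction_entries):
        """Bind an image to its thing, and that thing to its information library."""
        self.image_library[image_id] = thing_name
        self.information_libraries[thing_name] = dict(introduction_entries)

    def lookup(self, image_id):
        """Given a matched image, return (thing name, its information library)."""
        name = self.image_library[image_id]
        return name, self.information_libraries[name]

store = StorageModule()
store.register(
    "img_001", "chrysanthemum",
    {"introduction": "A flowering plant of the genus Chrysanthemum.",
     "origin": "Cultivated in China for millennia."},
)
name, info = store.lookup("img_001")
```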
The storage module also stores image data, classified by type. Specifically, the classification types include animal, plant, microorganism, and so on, and are not limited here.
The control module 22 communicates with the controller 14 by wireless signals. Specifically, the controller is connected to one wireless communication module and the control module to another; the two wireless communication modules are connected by wireless signals and transmit information such as images, voice, and video between them.
The image recognition module 23 is configured to recognize an intended object marked in the new object image and generate an intended object image.
The matching module 24 is used to match the image of the intended object to the image of a thing in the image library to determine the information library of the intended object.
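One plausible matching strategy — an assumption, since the patent does not say how matching is performed — is to compare a feature vector of the intended-object image against vectors for the library images and pick the nearest one.

```python
import math

# Hedged sketch of the matching module as nearest-neighbor search
# over hypothetical image feature vectors.
def nearest_match(query_vec, library):
    """Return the thing name whose feature vector is closest to query_vec."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(library, key=lambda name: dist(query_vec, library[name]))

library = {
    "chrysanthemum": [0.9, 0.1, 0.0],
    "watermelon":    [0.1, 0.8, 0.3],
}
best = nearest_match([0.15, 0.75, 0.25], library)
```

In practice the feature vectors would come from an image-recognition model; the three-component vectors above exist only to make the example runnable.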
The intention analyzing module 25 uses speech recognition to analyze the semantics of the various voice commands sent out by the controller. After the intention analyzing module recognizes a voice command and parses its semantics, the control module performs the corresponding processing action based on those semantics. Specifically, the intention analyzing module 25 includes a voice recognition unit connected to the control module, and a semantic understanding unit connected to both the voice recognition unit and the control module.
The speech synthesis module 26 is used to synthesize the information output by the control module into answer speech. The speech synthesis module 26 converts the text information (information output by the control module) into audio (answer speech). After the answer voice is synthesized, the control module sends the answer voice to the controller, and the controller plays the answer voice through the voice player to inform a user.
To clearly explain the working principle of the object-identifying camera device of the invention, a watermelon is taken as the intended object by way of example:
the user uses the camera body to take a picture containing an image of an intended object (watermelon), circles out the intended object (watermelon) in the picture on a touch screen of the camera body, and clicks and records the inquiry voice through the touch screen. After the microphone collects the inquiry voice of the user, an uploading instruction on the touch screen is clicked, and the inquiry voice and the picture of the object which is circled with the intention are simultaneously sent to the control module by the controller.
After receiving the query voice and the photo at the same time, the control module sends the query voice to the intention analyzing module and the photo to the image recognition module. The intention analyzing module performs speech recognition on the query voice, parses its semantics, and sends the semantics to the control module. Meanwhile, the image recognition module performs image recognition on the photo, generates an intended-object image, and sends it to the control module. After the control module obtains the semantics of the query voice and the intended-object image, the intended-object image is matched with the image of the watermelon in the image library, and the watermelon information library is determined. Based on the semantics of the query voice, the control module extracts the watermelon information matching those semantics and sends it to the speech synthesis module. For example, if the semantics of the query voice are "varieties of watermelon", the control module extracts the introduction information (text information) on watermelon varieties from the watermelon information library. The speech synthesis module synthesizes that introduction information into answer speech and sends it to the control module. The control module receives the answer speech and sends it out.
The controller receives the answer voice of the control module and sends the answer voice to the voice player, and the voice player plays the answer voice for the user to learn by reference.
The object-identifying camera device of the invention gives an ordinary touch-screen camera the ability to identify objects circled in a picture, based on artificial-intelligence capabilities such as image recognition, speech recognition, semantic understanding, and speech synthesis, helping children and other users get to know and learn about new things more quickly and efficiently.
In a preferred embodiment, the server further comprises a video composition module 27, connected to the control module 22. After the controller sends a new-thing image to the control module and the image recognition module recognizes it, the control module stores the image in the storage module, classified under the appropriate type. When the user taps the touch screen to watch a video (such as "new things I learned in spring 2021"), the controller generates a video composition instruction and sends it out. After the control module receives the instruction, it extracts a number of new-thing images from the storage module by time or by type. The video composition module composes the extracted images into video information. The control module then sends the video information to the controller, and the controller plays it through the camera body.
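The patent only says that the stored images are combined into video information. This toy version builds a slideshow frame list (each image repeated for a per-image duration) — an assumption standing in for real video encoding, which would use an encoder rather than a Python list.

```python
# Hedged sketch of the video composition module: turn a set of stored
# new-thing images into a simple slideshow "video" structure.
def compose_slideshow(images, fps=2, seconds_per_image=1):
    frames = []
    for img in images:
        # Repeat each image for its display duration, at the given frame rate.
        frames.extend([img] * (fps * seconds_per_image))
    return {"fps": fps, "frames": frames}

video = compose_slideshow(["flower_1.jpg", "flower_2.jpg"], fps=2)
```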
When the user taps the touch screen to view pictures (such as "flowers I have seen"), the controller generates a picture collection instruction and sends it out. After the control module receives the instruction, it extracts a number of new-thing images from the storage module and sends them to the controller, which displays them through the camera body.
In addition, the user can share and transfer the captured new-thing images through the server.
In this embodiment, the server is a cloud server.
With continued reference to fig. 1, the present invention provides an object recognizing method of an object recognizing camera device, comprising the steps of:
S1: The camera body 11 collects new object images.
S2: the microphone 12 collects the query voice.
S3: the object is marked in the image of the new object through the touch screen of the camera body 11.
S4: the controller 14 simultaneously transmits the image of the new object labeling the intended object and the inquiry voice to the outside.
S5: an image library and a plurality of information libraries are constructed in the storage module 21, the image library includes images of a plurality of objects, each information library includes introduction information of an object, and the image of each object is correspondingly connected to one information library.
S6: the control module 22 receives both the image of the new thing and the voice of the inquiry.
S7: the image recognition module 23 recognizes the intended object marked in the new object image and generates an intended object image.
S8: the matching module 24 matches the image of the intended object to an image of a thing in the image library to determine an information library of the intended object.
S9: the intention analyzing module 25 recognizes and analyzes the query voice by voice to obtain the semantics of the query voice;
s10: in the information base of the determined intended object, the control module 22 extracts the response information related to the semantics.
S11: the speech synthesis module 26 synthesizes the answer information into answer speech.
S12: the control module 22 sends the answer voice to the outside.
S13: the controller 14 receives the answer voice.
S14: the voice player 13 outputs the answer voice.
Wherein, step S9 includes:
a voice recognition unit recognizes the query voice to obtain a text of the query voice;
a semantic understanding unit understands text of the query speech to obtain semantics of the query speech.
As a preferred embodiment, the object recognizing method of the object recognizing camera device of the present invention further comprises the following steps:
a. After the control module receives the new object image and the query voice at the same time, the storage module 21 stores the new object image in a classified manner.
b. After the control module has received new-thing images and query voices simultaneously a number of times, the user speaks a review voice, and the microphone 12 collects it.
c. The controller 14 sends a review voice to the outside.
d. The control module 22 receives review speech;
e. The intention analyzing module 25 performs speech recognition and parsing on the review voice to obtain its semantics;
f. in the storage module 21, the control module 22 extracts the classified and stored new object images matched with the semantics of the review voice based on the semantics of the review voice and sends the images to the outside;
g. the controller 14 receives the image of the new thing sent by the control module 22;
h. the camera body 11 plays the new object image transmitted by the control module 22.
Wherein, step e includes:
a voice recognition unit recognizes the review voice to obtain a text of the review voice;
a semantic understanding unit understands the text of the review speech to obtain semantics of the review speech.
As a preferred embodiment, the object recognizing method of the object recognizing camera device of the present invention further comprises the following steps:
A. After the control module has received new-thing images and query voices simultaneously a number of times, the user speaks a recall voice, and the microphone 12 collects it.
B. The controller 14 sends a recall voice to the outside.
C. The control module 22 receives the recall voice.
D. The intention analyzing module 25 performs speech recognition and parsing on the recall voice to obtain its semantics.
E. In the storage module 21, the control module 22 extracts the classified and stored new object images matched with the semantic meaning of the recall voice.
F. The video composition module 27 composes the extracted new thing image into video information.
G. The control module 22 sends out the video information.
H. The controller 14 receives video information.
I. The camera body 11 plays video information.
Wherein, step D includes:
a voice recognition unit recognizes the recall voice to obtain a text of the recall voice;
the semantic understanding unit understands the text of the recall voice to obtain the semantics of the recall voice.
In the object-recognizing method of the object-recognizing camera device of the invention:
    • Image recognition identifies the circled area; by default, the full frame is selected.
    • Speech recognition, semantic understanding, and speech synthesis are used for voice interaction.
    • The image library is backed by a large intelligent image-recognition model, so picture content can be recognized and converted into the corresponding thing's name.
    • The information library contains a large number of intelligent question-answer pairs, and can intelligently match a question and give the corresponding answer.
The above description is only a preferred embodiment of the application and is illustrative of the principles of the technology employed. It will be appreciated by a person skilled in the art that the scope of the invention as referred to in the present application is not limited to the embodiments with a specific combination of the above-mentioned features, but also covers other embodiments with any combination of the above-mentioned features or their equivalents without departing from the inventive concept. For example, the above features may be replaced with (but not limited to) features having similar functions disclosed in the present application.

Claims (8)

1. An object recognition camera device, comprising:
the touch screen camera comprises a camera body, a microphone, a voice player and a controller, wherein the camera body is used for acquiring images of new things and playing images and videos; and
the server comprises a control module, a storage module, an image recognition module, a matching module, an intention analysis module and a voice synthesis module, wherein the control module is connected with the controller through wireless signals, the storage module is used for storing an image library and a plurality of information libraries, the image recognition module is used for recognizing the intention objects marked in the images of the new objects and generating intention object images, the matching module is used for matching the intention object images to the images of the objects in the image library to determine the information libraries of the intention objects, the image recognition module, the matching module, the intention analysis module and the storage module are respectively connected with the control module, the image library comprises a plurality of images of the objects, each information library comprises introduction information of the objects, and the images of the objects are correspondingly connected with one information library.
2. The object identifying camera device of claim 1, wherein the server further comprises a video compositing module connected to the control module.
3. The object identifying camera device of claim 1, wherein the server is a cloud server.
4. The object identifying camera device as claimed in claim 1, wherein the intention analyzing module comprises a voice recognition unit and a semantic understanding unit, the voice recognition unit is connected to the control module, and the semantic understanding unit is connected to the voice recognition unit and the control module.
5. An object recognizing method of an object recognizing camera device as claimed in any one of claims 1 to 4, comprising the steps of:
the camera body collects new object images;
the microphone collects inquiry voice;
marking an intended object in the new object image through the touch screen of the camera body;
the controller simultaneously transmits the new object image marked with the intention object and the inquiry voice to the outside;
constructing an image library and a plurality of information libraries in a storage module, wherein the image library comprises images of a plurality of objects, each information library comprises introduction information of the objects, and the image of each object is correspondingly connected with one information library;
the control module receives the new object image and the inquiry voice at the same time;
the image identification module identifies the intention object marked in the new object image and generates an intention object image;
the matching module matches the image of the intended object to the image of the thing in the image library to determine an information library of the intended object;
the intention analysis module analyzes the query voice to obtain the semantic meaning of the query voice;
in the determined information base of the intention object, the control module extracts answer information related to the semantics;
the voice synthesis module synthesizes the answer information into answer voice;
the control module sends the answer voice to the outside;
the controller receiving the answer speech;
the voice player outputs the answer voice.
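The question-answering pipeline of claim 5 (match the marked object, parse the query semantics, extract the related answer) can be sketched end to end. The matching and "semantic understanding" steps below are toy stubs, and every function name is an illustrative assumption rather than the patent's implementation:

```python
# Minimal sketch of the claim-5 pipeline: marked image -> matched object ->
# information library -> semantics-matched answer text (which would then be
# handed to a speech synthesis module as the "answer voice").

def match_object(marked_image, image_library):
    """Toy matcher: exact byte match. A real device compares visual features."""
    for name, image in image_library.items():
        if image == marked_image:
            return name
    return None


def parse_intent(query_text):
    """Toy 'semantic understanding': map a question to a topic key."""
    return "what" if "what" in query_text.lower() else "about"


def answer_query(marked_image, query_text, image_library, info_libraries):
    name = match_object(marked_image, image_library)
    if name is None:
        return "Sorry, I don't recognise this yet."
    topic = parse_intent(query_text)
    info = info_libraries[name]
    # Fall back to any available introduction text if the topic is unknown.
    return info.get(topic, next(iter(info.values())))


image_library = {"giraffe": b"img-1"}
info_libraries = {"giraffe": {"what": "A giraffe is the tallest living land animal."}}
print(answer_query(b"img-1", "What is this?", image_library, info_libraries))
```

Note that the image and the voice travel together: the claim requires the controller to send, and the control module to receive, both simultaneously, so a single query always pairs one marked image with one utterance.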
6. The object identifying method according to claim 5, further comprising:
after receiving the new object image and the query voice at the same time, storing the new object image in classified form in the storage module;
after the new object image and the query voice have been received simultaneously a plurality of times, the microphone collects a review voice;
the controller sends the review voice to the outside;
the control module receives the review voice;
the intention analysis module analyzes the review voice to obtain the semantics of the review voice;
in the storage module, the control module extracts the classified and stored new object images matched with the semantics of the review voice based on the semantics of the review voice and sends the new object images to the outside;
the controller receives the new object image sent by the control module;
the camera body plays the new object image sent by the control module.
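Claim 6's review flow stores each queried image under a classification and later retrieves images whose classification matches the semantics of a review utterance. A minimal sketch, assuming the classification is a plain category label and the "semantics" reduce to keyword matching (both assumptions, not the disclosure's method):

```python
# Toy sketch of claim-6 classified storage and semantic review retrieval.
from collections import defaultdict


class ReviewStore:
    def __init__(self):
        self._by_category = defaultdict(list)

    def store(self, category, image):
        """Classified storage: file each new-object image under its category."""
        self._by_category[category].append(image)

    def review(self, review_text):
        """Return stored images whose category appears in the review semantics."""
        words = review_text.lower().split()
        hits = []
        for category, images in self._by_category.items():
            if category in words:
                hits.extend(images)
        return hits


store = ReviewStore()
store.store("giraffe", b"img-1")
store.store("rose", b"img-2")
print(store.review("show me the giraffe again"))
```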
7. The object identifying method according to claim 5, further comprising:
after receiving the new object image and the query voice simultaneously a plurality of times, the microphone collects a recall voice;
the controller sends the recall voice to the outside;
the control module receives the recall voice;
the intention analysis module analyzes the recall voice to obtain the semantics of the recall voice;
in the storage module, the control module extracts the classified and stored new object images matched with the semantics of the recall voice;
the video synthesis module synthesizes the extracted new object images into video information;
the control module sends the video information to the outside;
the controller receives the video information;
the camera body plays the video information.
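Claim 7 differs from claim 6 only in the final step: the matched images are synthesized into video information before playback. In the sketch below the "video" is just an ordered frame list with per-frame durations; a real video synthesis module would encode an actual stream, and all names are illustrative:

```python
# Toy sketch of the claim-7 recall flow: semantics-matched images are
# gathered and handed to a video-synthesis step (here, a slideshow list).

def synthesize_video(images, seconds_per_frame=2.0):
    """Turn a list of images into a toy slideshow representation."""
    return [{"frame": img, "duration": seconds_per_frame} for img in images]


def recall(recall_text, stored):
    """stored: dict of category -> list of images, as in classified storage."""
    words = recall_text.lower().split()
    matched = [img for cat, imgs in stored.items() if cat in words
               for img in imgs]
    return synthesize_video(matched)


stored = {"giraffe": [b"img-1", b"img-3"], "rose": [b"img-2"]}
video = recall("what giraffe pictures did we take", stored)
print(len(video))
```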
8. The object identifying method according to claim 5, wherein the step of the intention analysis module parsing the query voice to obtain the semantics of the query voice comprises:
the voice recognition unit recognizes the query voice to obtain a text of the query voice;
the semantic understanding unit understands the text of the query voice to obtain the semantics of the query voice.
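Claim 8 splits intention parsing into two stages: speech recognition (audio to text) and semantic understanding (text to semantics). The sketch below stubs both stages with toy lookups; the point is the stage boundary, not the models, and every name is an assumption:

```python
# Toy sketch of the claim-8 two-stage parse: ASR stage then NLU stage.

def speech_recognition_unit(audio):
    """Stub ASR: in this toy, the 'audio' bytes are a transcript lookup key."""
    transcripts = {b"utt-1": "what animal is this"}
    return transcripts.get(audio, "")


def semantic_understanding_unit(text):
    """Stub NLU: extract a coarse intent and slots from the transcript."""
    if "what" in text:
        category = "animal" if "animal" in text else None
        return {"intent": "identify", "slots": {"category": category}}
    return {"intent": "unknown", "slots": {}}


text = speech_recognition_unit(b"utt-1")
semantics = semantic_understanding_unit(text)
print(semantics["intent"])
```

Keeping the two units separate, as the claim does, lets either stage be swapped independently, e.g. replacing the recognizer without retraining the understanding model.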
CN202110959641.XA 2021-08-20 2021-08-20 Camera identifying equipment and object identifying method Active CN113709364B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110959641.XA CN113709364B (en) 2021-08-20 2021-08-20 Camera identifying equipment and object identifying method

Publications (2)

Publication Number Publication Date
CN113709364A true CN113709364A (en) 2021-11-26
CN113709364B CN113709364B (en) 2024-02-09

Family

ID=78654042

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110959641.XA Active CN113709364B (en) 2021-08-20 2021-08-20 Camera identifying equipment and object identifying method

Country Status (1)

Country Link
CN (1) CN113709364B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114513604A (en) * 2022-01-12 2022-05-17 中瑞云软件(深圳)有限公司 Practical intelligent camera for children

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103092507A (en) * 2011-11-08 2013-05-08 三星电子株式会社 Apparatus and method for representing an image in a portable terminal
CN112182252A (en) * 2020-11-09 2021-01-05 浙江大学 Intelligent medication question-answering method and device based on medicine knowledge graph

Also Published As

Publication number Publication date
CN113709364B (en) 2024-02-09

Similar Documents

Publication Publication Date Title
US20220239988A1 (en) Display method and apparatus for item information, device, and computer-readable storage medium
US11151406B2 (en) Method, apparatus, device and readable storage medium for image-based data processing
JP6713034B2 (en) Smart TV audio interactive feedback method, system and computer program
Perez et al. Interaction relational network for mutual action recognition
US20190188903A1 (en) Method and apparatus for providing virtual companion to a user
CN105126355A (en) Child companion robot and child companioning system
CN107798932A (en) A kind of early education training system based on AR technologies
TW201117114A (en) System, apparatus and method for message simulation
CN113835522A (en) Sign language video generation, translation and customer service method, device and readable medium
CN113392687A (en) Video title generation method and device, computer equipment and storage medium
CN113709364B (en) Camera identifying equipment and object identifying method
JP2015104078A (en) Imaging apparatus, imaging system, server, imaging method and imaging program
CN108108412A (en) Children cognition study interactive system and method based on AI open platforms
WO2021089059A1 (en) Method and apparatus for smart object recognition, object recognition device, terminal device, and storage medium
CN111800650B (en) Video dubbing method and device, electronic equipment and computer readable medium
CN111402640A (en) Children education robot and learning material pushing method thereof
CN114443938A (en) Multimedia information processing method and device, storage medium and processor
JP7130290B2 (en) information extractor
JP4649944B2 (en) Moving image processing apparatus, moving image processing method, and program
CN115209233A (en) Video playing method and related device and equipment
CN114550183A (en) Electronic equipment and error recording method
CN113536009A (en) Data description method and device, computer readable medium and electronic device
CN110163043A (en) Type of face detection method, device, storage medium and electronic device
CN111259182A (en) Method and device for searching screen shot image
Teixeira et al. Silent speech interaction for ambient assisted living scenarios

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant