CN113542689A - Image processing method based on wireless Internet of things and related equipment - Google Patents


Info

Publication number
CN113542689A
CN113542689A
Authority
CN
China
Prior art keywords
searched, target, image, acquiring, reference image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110805207.6A
Other languages
Chinese (zh)
Inventor
杨鹏
叶炜杰
吴义魁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jinmao Smart Technology Guangzhou Co ltd
Original Assignee
Jinmao Smart Technology Guangzhou Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jinmao Smart Technology Guangzhou Co ltd
Priority to CN202110805207.6A
Publication of CN113542689A
Legal status: Pending

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 - Television systems
    • H04N7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181 - Closed-circuit television [CCTV] systems for receiving images from a plurality of remote sources
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 - Information retrieval of video data
    • G06F16/73 - Querying
    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16Y - INFORMATION AND COMMUNICATION TECHNOLOGY SPECIALLY ADAPTED FOR THE INTERNET OF THINGS [IoT]
    • G16Y40/00 - IoT characterised by the purpose of the information processing
    • G16Y40/10 - Detection; Monitoring
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04W - WIRELESS COMMUNICATION NETWORKS
    • H04W4/00 - Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/02 - Services making use of location information
    • H04W4/021 - Services related to particular areas, e.g. point of interest [POI] services, venue services or geofences
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04W - WIRELESS COMMUNICATION NETWORKS
    • H04W4/00 - Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/02 - Services making use of location information
    • H04W4/029 - Location-based management or tracking services
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04W - WIRELESS COMMUNICATION NETWORKS
    • H04W4/00 - Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/30 - Services specially adapted for particular environments, situations or purposes
    • H04W4/33 - Services specially adapted for particular environments, situations or purposes for indoor environments, e.g. buildings

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The embodiments of the application disclose an image processing method based on a wireless Internet of things, and related equipment, applied to an electronic device in a wireless Internet of things that includes a plurality of cameras. The method comprises: acquiring a reference image of an object to be searched; acquiring a current video clip through the plurality of cameras; searching the current video clip according to the reference image; and, if a target image that successfully matches the reference image is found, marking the corresponding position of the target image in an indoor map so as to guide a user to the object to be searched. The embodiments of the application can help users quickly find the objects they need.

Description

Image processing method based on wireless Internet of things and related equipment
Technical Field
The application relates to the technical field of communication, in particular to an image processing method based on a wireless Internet of things and related equipment.
Background
With the development of science and technology, smart homes are becoming an ever greater part of people's lives. A smart home takes the home as a platform and, using integrated wiring, network communication, security, automatic control, and audio/video technologies, connects the facilities related to home life into an efficient management system for residential facilities and household affairs. This improves the safety, convenience, comfort, and artistry of the home and provides an environmentally friendly, energy-saving living environment.
Disclosure of Invention
The embodiments of the application provide an image processing method based on a wireless Internet of things, and related equipment, which can help a user quickly find needed objects.
In a first aspect, an embodiment of the present application provides an image processing method based on a wireless internet of things, which is applied to an electronic device, where the electronic device is in the wireless internet of things, and the wireless internet of things includes a plurality of cameras, and the method includes:
acquiring a reference image of an object to be searched;
acquiring a current video clip through the plurality of cameras;
searching the current video clip according to the reference image;
and if a target image that successfully matches the reference image is found, marking the corresponding position of the target image in an indoor map, so as to guide a user to find the object to be searched.
In a second aspect, an embodiment of the present application provides an image processing apparatus based on a wireless internet of things, which is applied to an electronic device, the electronic device is in the wireless internet of things, the wireless internet of things includes a plurality of cameras, and the apparatus includes: a first acquisition unit, a second acquisition unit, a search unit and a marking unit, wherein,
the first acquisition unit is used for acquiring a reference image of an object to be searched;
the second obtaining unit is used for obtaining the current video clip through the plurality of cameras;
the searching unit is used for searching the current video clip according to the reference image;
and the marking unit is configured to, if a target image that successfully matches the reference image is found, mark the corresponding position of the target image in the indoor map, so as to guide a user to find the object to be searched.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and the program includes instructions for executing the steps in the first aspect of the embodiment of the present application.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program for electronic data exchange, where the computer program enables a computer to perform some or all of the steps described in the first aspect of the embodiment of the present application.
In a fifth aspect, embodiments of the present application provide a computer program product, where the computer program product includes a non-transitory computer-readable storage medium storing a computer program, where the computer program is operable to cause a computer to perform some or all of the steps as described in the first aspect of the embodiments of the present application. The computer program product may be a software installation package.
The embodiment of the application has the following beneficial effects:
It can be seen that the image processing method based on the wireless Internet of things and the related equipment described in the embodiments of the present application are applied to an electronic device in a wireless Internet of things that includes a plurality of cameras. A reference image of an object to be searched is acquired, a current video clip is acquired through the plurality of cameras, and the current video clip is searched according to the reference image; if a target image that successfully matches the reference image is found, its corresponding position is marked in the indoor map to guide the user to the object. In this way, when a user needs to find an object, video can be captured through the cameras and searched against an image of the object, with the position of the search result taken as the position of the object to be searched, guiding the user to it quickly and helping the user find needed objects fast.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of an image processing method based on a wireless internet of things according to an embodiment of the present application;
fig. 2 is a schematic flowchart of another image processing method based on a wireless internet of things according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of another electronic device provided in an embodiment of the present application;
fig. 4 is a block diagram of functional units of an image processing apparatus based on a wireless internet of things according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings. It is obvious that the described embodiments are only a part of the embodiments of the present application, not all of them. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present application.
The terms "first," "second," and the like in the description and claims of the present application and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The electronic device described in the embodiments of the present application may include a smartphone (e.g., an Android phone, an iOS phone, or a Windows phone), a tablet computer, a palmtop computer, a vehicle data recorder, a traffic guidance platform, a server, a notebook computer, a mobile Internet device (MID), or a wearable device (e.g., a smart watch or a Bluetooth headset). These are merely examples, not an exhaustive list; the electronic device may also be a server, a video matrix, or an Internet of things device, which is not limited herein.
In the embodiments of the application, the Internet of things device may be at least one of the following: an intelligent massage chair, an intelligent lighting device, an intelligent distribution box, an intelligent router, an intelligent switch controller, an intelligent control panel, an intelligent power socket, an intelligent gateway, an intelligent coordinator, an intelligent node, an intelligent pet feeder, an intelligent set-top box, an intelligent electricity meter, an intelligent humidifier, an intelligent television, an intelligent refrigerator, an intelligent washing machine, an intelligent office desk, an intelligent air conditioner, an intelligent range hood, an intelligent microwave oven, an intelligent water purifier, an intelligent rice cooker, an intelligent heater, an intelligent door, an intelligent fan, an intelligent water dispenser, an intelligent soybean milk machine, an intelligent oven, an intelligent mahjong machine, an intelligent sofa, an intelligent household robot, an intelligent curtain, an intelligent toilet, an intelligent mobile phone, an intelligent camera, intelligent furniture, an intelligent sweeping robot, an intelligent sensor, and the like; the Internet of things device is not limited herein, and may also be any of the electronic devices above. The smart sensor may be one of the following: an intelligent temperature sensor, an intelligent humidity sensor, an intelligent smoke sensor, an intelligent proximity sensor, an intelligent light sensor, and the like, without limitation.
The following describes embodiments of the present application in detail.
Referring to fig. 1, fig. 1 is a schematic flowchart of an image processing method based on a wireless internet of things provided in an embodiment of the present application, and is applied to an electronic device, where the electronic device is in the wireless internet of things, and the wireless internet of things includes a plurality of cameras, as shown in the figure, the image processing method based on the wireless internet of things includes:
101. Acquiring a reference image of the object to be searched.
The object to be searched may be any object in the smart home or Internet of things environment, for example at least one of the following: a toilet, a door, a fan, a window, a hairpin, a key, an identification card, a book, a stool, a cat, a dog, a potato, a refrigerator, a purse, hair, and the like, without limitation. The electronic device may receive a reference image of the object to be searched sent by the user, or obtain the user's voice and retrieve the reference image of the object from an image library based on the voice. There may be one or more objects to be searched.
In a specific implementation, the electronic device may be in a wireless Internet of things that includes a plurality of cameras; the cameras may be located in different areas so as to monitor the physical environment of the wireless Internet of things. The wireless Internet of things may also include other smart home devices, such as an intelligent massage chair, an intelligent lighting device, an intelligent distribution box, an intelligent router, an intelligent switch controller, an intelligent control panel, an intelligent power socket, an intelligent gateway, an intelligent coordinator, an intelligent node, an intelligent pet feeder, an intelligent set-top box, an intelligent electricity meter, an intelligent humidifier, and the like, without limitation. The electronic device may be the control device in the wireless Internet of things, used for controlling the other devices in it. Each camera may be at least one of the following: a visible light camera, an infrared camera, a wide-angle camera, and the like, without limitation.
The application scenario related to the embodiment of the present application may be any indoor application scenario, for example, a home, an office building, a park, an airport, a bus stop, a train station, a hospital, a school, a museum, a tourist attraction, a mall, a supermarket, an amusement park, a vegetable market, and the like, which is not limited herein.
Optionally, the wireless internet of things further includes a microphone, and in step 101, acquiring the reference image of the object to be searched may include the following steps:
11. acquiring target voice information through the microphone;
12. analyzing the target voice information to obtain at least one keyword;
13. determining target attribute information of the object to be searched according to the at least one keyword;
14. and acquiring the reference image according to the target attribute information.
The wireless Internet of things may include one or more microphones, through which the user's voice can be collected.
In a specific implementation, the electronic device may obtain the user's target voice information through the microphone and analyze it to obtain at least one keyword, where a keyword may be at least one of the following: a character string, a word, a tone, and the like, without limitation. A keyword may represent some attribute information of the object to be searched, and the attribute information may be at least one of the following: name, color, size, function, purpose, material, and the like, without limitation. The electronic device may then determine the target attribute information of the object to be searched according to the at least one keyword, and either generate an image from the target attribute information or search an image library to obtain the reference image, where the image library may store reference images of different objects together with their corresponding attribute information.
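As an illustrative sketch of steps 11 to 14 (not taken from the patent), the keyword-to-attribute-to-image lookup could be organized as follows. All names here (`IMAGE_LIBRARY`, `KNOWN_ATTRIBUTES`, `parse_keywords`) and the attribute vocabulary are hypothetical:

```python
# Hypothetical sketch: map recognized speech to keywords, derive attribute
# information, and look up a reference image in an image library.

# Image library: attribute tuples -> reference image identifier (illustrative).
IMAGE_LIBRARY = {
    ("key", "silver"): "img_key_silver.png",
    ("wallet", "brown"): "img_wallet_brown.png",
}

# Assumed attribute vocabulary (name, color, etc.).
KNOWN_ATTRIBUTES = {"key", "wallet", "silver", "brown"}

def parse_keywords(voice_text):
    """Split recognized speech into words, keeping known attribute words."""
    return [w for w in voice_text.lower().split() if w in KNOWN_ATTRIBUTES]

def lookup_reference_image(voice_text):
    """Return the reference image whose attributes best match the keywords."""
    keywords = set(parse_keywords(voice_text))
    best, best_overlap = None, 0
    for attrs, image in IMAGE_LIBRARY.items():
        overlap = len(keywords & set(attrs))
        if overlap > best_overlap:
            best, best_overlap = image, overlap
    return best
```

A real implementation would sit behind a speech-recognition front end; here the recognized text is passed in directly.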
102. Acquiring the current video clip through the plurality of cameras.
The electronic device may shoot through the plurality of cameras and take the content they capture as the current video clip, which may comprise a plurality of video sub-clips.
Optionally, the wireless internet of things further includes a sensor, and the step 102 of acquiring the current video clip through the plurality of cameras may include the following steps:
a21, acquiring a plurality of detection parameters through the sensor, wherein each detection parameter is the detection parameter of an area or an object;
a22, determining a detection parameter meeting a preset requirement in the plurality of detection parameters to obtain at least one target detection parameter;
a23, determining a camera corresponding to the at least one target detection parameter, and acquiring the current video clip corresponding to the camera.
In this embodiment, the sensor may be at least one of the following: an odor sensor, a smoke sensor, a temperature sensor, a humidity sensor, a magnetic field detection sensor, a pressure sensor, etc., without limitation thereto. The different sensors may correspond to different detection parameters, for example, the detection parameters of the odor sensor may be at least one of: the type of scent, the concentration of the scent, the direction of the scent, etc., are not limited thereto. The detection parameter of the smoke sensor may be at least one of: smoke type, smoke concentration, smoke direction, etc., without limitation. The detection parameter of the temperature sensor may be at least one of: temperature value, the position that the temperature value corresponds to. The detection parameter of the humidity sensor may be at least one of: the humidity value, the location of the humidity value, etc., are not limited herein. The detection parameter of the magnetic field detection sensor may be at least one of: magnetic field strength, magnetic field range, magnetic field center position, etc., without limitation. The detection parameter of the pressure sensor may be at least one of: pressure values, locations corresponding to pressure values, etc., and are not limited herein. The preset requirement may be set by the user or default by the system, for example, the preset requirement may be that the scent type is a preset scent type.
In a specific implementation, the wireless Internet of things may include a sensor, through which a plurality of detection parameters may be obtained, each being the detection parameter of an area or an object. Among these, the detection parameters meeting a preset requirement are determined, yielding at least one target detection parameter; the camera corresponding to the at least one target detection parameter can then be determined and the current video clip corresponding to that camera acquired. For example, when a user searches for an object in practice, a preliminary positioning may be performed through some attribute of the object, such as determining a search range through odor and then searching within that range, which can improve search efficiency.
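The filtering in steps A21 to A23 can be sketched as below. The sensor readings, the threshold semantics of the "preset requirement", and the area-to-camera mapping are illustrative assumptions, not details from the patent:

```python
# Hypothetical sketch: filter sensor detection parameters against a preset
# requirement and select the cameras covering the matching areas.

SENSOR_READINGS = [
    {"area": "kitchen", "type": "smoke", "value": 0.8},
    {"area": "bedroom", "type": "odor", "value": 0.1},
    {"area": "hallway", "type": "odor", "value": 0.6},
]

# Illustrative mapping from monitored areas to camera identifiers.
AREA_TO_CAMERA = {"kitchen": "cam1", "bedroom": "cam2", "hallway": "cam3"}

def select_cameras(readings, sensor_type, threshold):
    """Return cameras for areas whose readings meet the preset requirement
    (here: matching sensor type and value at or above a threshold)."""
    targets = [r for r in readings
               if r["type"] == sensor_type and r["value"] >= threshold]
    return [AREA_TO_CAMERA[r["area"]] for r in targets]
```

The video clips would then be pulled only from the returned cameras rather than from all cameras.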
Optionally, the wireless internet of things further includes a sensor, and the step 102 of acquiring the current video clip through the plurality of cameras may include the following steps:
b21, when the object to be searched is a close-fitting object of the user, confirming a target time period for the object to be searched to be lost;
b22, acquiring a target activity track of the user in the target time period;
and B23, determining a related camera according to the target activity track, and acquiring the current video clip through the camera.
The electronic device may confirm a target time period in which the object to be searched was lost; the target time period may take as its starting point the last time at which the user remembers the object not yet being lost, and the current time as its end point. The target activity track of the user in the target time period can be obtained through the user's wearable device or a motion app on the user's device, the related cameras determined according to the target activity track, and the current video clip acquired through those cameras.
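A minimal sketch of steps B21 to B23, assuming the activity track is available as timestamped area visits; the data shapes and names (`ACTIVITY_TRACK`, `AREA_TO_CAMERA`) are hypothetical:

```python
# Hypothetical sketch: from the target time period and the user's recorded
# activity track, pick the cameras whose coverage areas the track passes
# through.

# Track points: (timestamp, area). Coverage mapping is illustrative.
ACTIVITY_TRACK = [
    (100, "living_room"), (160, "kitchen"), (220, "study"),
]
AREA_TO_CAMERA = {"living_room": "cam_a", "kitchen": "cam_b", "study": "cam_c"}

def cameras_for_period(track, start, end):
    """Cameras covering every area visited within [start, end]."""
    areas = {area for ts, area in track if start <= ts <= end}
    return sorted(AREA_TO_CAMERA[a] for a in areas)
```

In practice the track would come from a wearable device or motion app, and the period start would be the user-reported last-seen time.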
103. Searching the current video clip according to the reference image.
In a specific implementation, the electronic device may decompose the current video clip into frame-by-frame images, perform feature extraction on the reference image to obtain a first feature set, and perform feature extraction on the frame images to obtain second feature sets. The first feature set is compared with each second feature set, and the image corresponding to a successfully compared second feature set is taken as the search result; alternatively, the area where the successfully matched features in the second feature set are located may be taken as the search result. Both feature sets may include a plurality of features, and a feature may be at least one of the following: feature points, feature lines, feature vectors, and the like, without limitation.
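The frame-by-frame comparison of step 103 can be sketched as follows. This simplification assumes features have already been extracted (for example by a keypoint descriptor) and represents each feature set as a set of hashable descriptors; the overlap-ratio threshold is an assumed matching criterion, not one specified by the patent:

```python
# Simplified sketch: compare the reference image's feature set against each
# frame's feature set and report the frames that match well enough.

def match_score(reference_features, frame_features):
    """Fraction of reference features also found in the frame."""
    if not reference_features:
        return 0.0
    return len(reference_features & frame_features) / len(reference_features)

def search_frames(reference_features, frames, threshold=0.6):
    """Return indices of frames whose match score meets the threshold."""
    return [i for i, feats in enumerate(frames)
            if match_score(reference_features, feats) >= threshold]
```

With real descriptors (e.g. ORB or SIFT vectors) the set intersection would be replaced by nearest-neighbor matching, but the control flow is the same.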
104. If a target image that successfully matches the reference image is found, marking the corresponding position of the target image in an indoor map, so as to guide the user to the object to be searched.
In a specific implementation, if a target image that successfully matches the reference image is found, the electronic device may mark its corresponding position in the indoor map to guide the user to the object to be searched, or may trigger the alarm device closest to that position to help the user quickly locate the area, improving search efficiency.
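The alarm option mentioned above could, under the assumption that device positions are known map coordinates, pick the nearest device by straight-line distance. A hypothetical sketch; the device table is illustrative:

```python
# Hypothetical sketch: trigger the alarm device nearest to the marked
# position, using straight-line distance on the indoor map.
import math

ALARM_DEVICES = {
    "alarm1": (0.0, 0.0),
    "alarm2": (5.0, 5.0),
    "alarm3": (2.0, 1.0),
}

def nearest_alarm(position, devices=ALARM_DEVICES):
    """Return the id of the alarm device nearest to the marked position."""
    return min(devices, key=lambda d: math.dist(position, devices[d]))
```

A more elaborate version might use walking distance on the indoor map rather than Euclidean distance, but the selection logic is the same.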
For example, in practical applications, if a user wants to search for an object, the user may input an image of the object to be searched, acquire an image captured by the current camera, and search the image captured by the current camera according to the image of the object, so as to quickly locate the object to be searched.
Optionally, after the step 103, the method may further include the following steps:
A1, if the image content corresponding to the reference image is not found, acquiring historical video clips of a preset time period shot by the plurality of cameras;
a2, searching the historical video clip according to the reference image to obtain at least one result image, wherein each result image corresponds to a shooting moment;
a3, acquiring a target result image of the shooting time closest to the current time from the at least one result image;
a4, estimating the target searching range of the object to be searched according to the target result image;
a5, marking the target searching range in the indoor map to guide the user to search the object to be searched.
In a specific implementation, the object a user is searching for may sometimes be hidden in a place not covered by any camera. In that case, historical video must be used to find subtle clues to the object, trace its most recent appearance position or appearance track, and then quickly narrow down the range in which the object may be located.
Specifically, if the electronic device does not find the image content corresponding to the reference image, it may acquire historical video clips of a preset time period captured by the plurality of cameras and search them according to the reference image to obtain at least one result image, each corresponding to a shooting moment. It then acquires, from the at least one result image, the target result image whose shooting moment is closest to the current time, and estimates the target search range of the object to be searched from it. For example, the area within a preset range of the position where the object last appeared may be taken as the target search range; alternatively, the motion track of the object may be derived from the result images and the target search range determined according to the motion trend of that track. Finally, the target search range is marked in the indoor map to guide the user to the object to be searched.
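Steps A1 to A5 reduce to picking the most recent result image and expanding a range around its position. A hedged sketch, with the square-range radius as an assumed parameter and the position format (map coordinates per result image) as an assumption:

```python
# Hypothetical sketch: from result images found in historical video, take
# the one with the latest shooting time and mark a square search range of a
# preset radius around its position.

def estimate_search_range(result_images, radius=1.0):
    """result_images: list of (shooting_time, (x, y)) tuples.
    Returns a bounding box (x_min, y_min, x_max, y_max) around the most
    recent appearance position of the object."""
    latest_time, (x, y) = max(result_images, key=lambda r: r[0])
    return (x - radius, y - radius, x + radius, y + radius)
```

The alternative described above, estimating from the motion trend of the track, would extrapolate from several recent positions instead of using only the latest one.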
For example, if the object to be searched cannot be found by the cameras, it may be hidden in a corner. In that case the historical video clips are used: the most recent appearance position of the object is located, the search range is determined based on that position, and the corresponding area is marked in the indoor map to guide the user to quickly find the object.
Optionally, after step 104, the following steps may be further included:
b1, acquiring the current position of the user;
b2, generating a navigation route between the current position and the corresponding position of the target image;
b3, displaying the navigation route on the indoor map.
In the embodiment of the application, the electronic device may acquire the current position of the user, generate a navigation route between the current position and the corresponding position of the target image according to a path planning algorithm, and display the navigation route on the indoor map, so that the user can quickly reach the position of the object to be searched.
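Since the patent does not name a specific path planning algorithm, the sketch below uses breadth-first search on a grid indoor map (0 = free cell, 1 = obstacle) purely as one plausible choice for steps B1 to B3:

```python
# Illustrative sketch: plan a route between the user's current position and
# the marked position with breadth-first search on a grid indoor map.
from collections import deque

def plan_route(grid, start, goal):
    """Return a list of (row, col) cells from start to goal, or None if the
    goal is unreachable. BFS on a 4-connected grid gives a shortest path."""
    rows, cols = len(grid), len(grid[0])
    queue, came_from = deque([start]), {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:
            # Reconstruct the path by walking predecessors back to start.
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                queue.append((nr, nc))
    return None
```

The returned cell sequence would then be rendered as the navigation route on the indoor map; A* or a visibility graph would be natural substitutes on larger maps.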
It can be seen that the image processing method based on the wireless Internet of things described in the embodiments of the present application is applied to an electronic device in a wireless Internet of things that includes a plurality of cameras. A reference image of an object to be searched is acquired, a current video clip is acquired through the plurality of cameras, and the current video clip is searched according to the reference image; if a target image that successfully matches the reference image is found, its corresponding position is marked in the indoor map to guide the user to the object. In this way, when a user needs to find an object, video can be captured through the cameras and searched against an image of the object, with the position of the search result taken as the position of the object to be searched, guiding the user to it quickly and helping the user find needed objects fast.
Referring to fig. 2, fig. 2 is a schematic flow chart of an image processing method based on a wireless internet of things according to an embodiment of the present application, and is applied to an electronic device, where the electronic device is in the wireless internet of things, and the wireless internet of things includes a plurality of cameras, as shown in the figure, the image processing method based on the wireless internet of things includes:
201. Acquiring a reference image of an object to be searched.
202. Acquiring a current video clip through the plurality of cameras.
203. Searching the current video clip according to the reference image.
204. If a target image that successfully matches the reference image is found, marking the corresponding position of the target image on an indoor map to guide the user to the object to be searched.
205. If the image content corresponding to the reference image is not found, acquiring historical video clips of a preset time period shot by the plurality of cameras.
206. Searching the historical video clips according to the reference image to obtain at least one result image, where each result image corresponds to a shooting moment.
207. Acquiring, from the at least one result image, the target result image whose shooting moment is closest to the current time.
208. Estimating a target search range of the object to be searched according to the target result image.
209. Marking the target search range on the indoor map to guide the user to the object to be searched.
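The patent does not disclose a concrete matching algorithm for steps 201-209, so the following is only an illustrative sketch. The `Frame` structure, the feature vectors, the L1-based `similarity` function, and the 0.5 threshold are all hypothetical stand-ins for whatever image features and matcher an implementation would actually use:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Frame:
    camera_id: str    # which camera captured the frame
    timestamp: float  # shooting moment (seconds)
    features: tuple   # hypothetical stand-in for extracted image features

def similarity(a: tuple, b: tuple) -> float:
    # Toy similarity score in (0, 1]: larger means more alike.
    return 1.0 / (1.0 + sum(abs(x - y) for x, y in zip(a, b)))

def search_clip(reference: tuple, frames: List[Frame],
                threshold: float = 0.5) -> Optional[Frame]:
    """Return the best-matching frame above the threshold, or None."""
    best = max(frames, key=lambda f: similarity(reference, f.features),
               default=None)
    if best is not None and similarity(reference, best.features) >= threshold:
        return best
    return None

def locate_object(reference, current, history):
    """Steps 201-209: search the current clip first (203-204), then fall
    back to the historical clip and keep the most recent match (205-209)."""
    hit = search_clip(reference, current)
    if hit is not None:
        return ("mark_position", hit.camera_id)            # step 204
    results = [f for f in history
               if search_clip(reference, [f]) is not None]  # step 206
    if not results:
        return ("not_found", None)
    latest = max(results, key=lambda f: f.timestamp)        # step 207
    return ("mark_range", latest.camera_id)                 # steps 208-209
```

Under these assumptions, a match in the current clip yields a marked position, and a miss falls through to the historical frames, keeping only the one with the latest shooting moment.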
For a detailed description of steps 201 to 209, reference may be made to the corresponding steps of the image processing method based on the wireless Internet of Things described with reference to fig. 1; details are not repeated here.
It can be seen that the image processing method based on the wireless Internet of Things and the related device described in the embodiments of the present application are applied to an electronic device in the wireless Internet of Things, where the wireless Internet of Things includes a plurality of cameras. A reference image of the object to be searched is acquired, a current video clip is acquired through the plurality of cameras, and the current video clip is searched according to the reference image. If a target image that successfully matches the reference image is found, the corresponding position of the target image is marked on the indoor map to guide the user to the object to be searched. If the image content corresponding to the reference image is not found, historical video clips of a preset time period shot by the plurality of cameras are acquired and searched according to the reference image to obtain at least one result image, each corresponding to a shooting moment; the target result image whose shooting moment is closest to the current time is selected, a target search range of the object is estimated from it, and that range is marked on the indoor map. In this way, when a user needs to find an object, video can be captured by the cameras and searched using an image of the object, with the position of the search result taken as the position of the object, guiding the user to find it quickly. Moreover, the object may sometimes be hidden in a place outside the cameras' notice; in that case, subtle traces of the object can be searched for in the historical video to trace its most recent appearance position or movement track, and the likely range of the object can then be quickly narrowed down from that position or track.
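The patent leaves open how the target search range of step 208 is estimated. One possible, purely hypothetical realization is a circle centred on the last-seen position whose radius grows with the time elapsed since the last appearance, assuming an upper bound on how fast the object can move (all numeric parameters here are assumptions):

```python
def estimate_search_range(last_pos, last_time, now,
                          speed=0.5, min_radius=2.0, max_radius=30.0):
    """Estimate where the object may now be, given its last appearance.

    last_pos   -- (x, y) of the most recent match on the indoor map
    last_time  -- shooting moment of that match (seconds)
    now        -- current time (seconds)
    speed      -- assumed maximum movement speed (map units per second)
    Returns a circle: centre plus a radius clamped to [min_radius, max_radius].
    """
    elapsed = max(0.0, now - last_time)
    radius = min(max_radius, max(min_radius, speed * elapsed))
    return {"centre": last_pos, "radius": radius}
```

The clamping keeps the marked range useful: never smaller than a sensible floor, and never so large that it covers the whole map.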
In accordance with the foregoing embodiments, please refer to fig. 3, which is a schematic structural diagram of an electronic device provided in an embodiment of the present application. As shown in the drawing, the electronic device includes a processor, a memory, a communication interface, and one or more programs. The electronic device is in a wireless Internet of Things that includes a plurality of cameras. The one or more programs are stored in the memory and configured to be executed by the processor, and in this embodiment of the present application the programs include instructions for performing the following steps:
acquiring a reference image of an object to be searched;
acquiring a current video clip through the plurality of cameras;
searching the current video clip according to the reference image;
and if a target image successfully matched with the reference image is searched, marking the corresponding position of the target image in an indoor map so as to guide a user to search the object to be searched.
Optionally, the program further includes instructions for performing the following steps:
if the image content corresponding to the reference image is not searched, acquiring historical video clips of preset time periods shot by the multiple cameras;
searching the historical video clip according to the reference image to obtain at least one result image, wherein each result image corresponds to one shooting moment;
acquiring a target result image of a shooting moment closest to the current time from the at least one result image;
estimating a target searching range of the object to be searched according to the target result image;
and marking the target searching range in the indoor map so as to guide a user to search the object to be searched.
Optionally, the wireless internet of things further includes a sensor, and in the aspect of acquiring the current video clip through the plurality of cameras, the program includes instructions for executing the following steps:
acquiring a plurality of detection parameters through the sensor, wherein each detection parameter is a detection parameter of an area or an object;
determining a detection parameter meeting a preset requirement in the plurality of detection parameters to obtain at least one target detection parameter;
and determining a camera corresponding to the at least one target detection parameter, and acquiring the current video clip corresponding to the camera.
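A minimal sketch of this sensor-driven camera selection follows. The numeric threshold standing in for the "preset requirement" and the sensor-to-camera mapping are both hypothetical, since the patent does not define the detection parameters concretely:

```python
def select_cameras(detections, threshold, camera_of):
    """Pick the cameras whose associated sensors report a notable reading.

    detections -- {sensor_id: detection parameter for an area or object}
    threshold  -- assumed numeric form of the 'preset requirement'
    camera_of  -- {sensor_id: camera_id} mapping each sensor to its camera
    """
    # Step 1: keep only the target detection parameters.
    hits = {sid for sid, value in detections.items() if value >= threshold}
    # Step 2: map each qualifying sensor to its camera; a set removes
    # duplicates when several sensors share one camera.
    return sorted({camera_of[sid] for sid in hits})
```

Only the clips from the returned cameras would then be fetched as the "current video clip", which avoids searching footage from cameras whose sensors saw nothing.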
Optionally, the wireless internet of things further includes a microphone, and in the aspect of acquiring the reference image of the object to be searched, the program includes instructions for executing the following steps:
acquiring target voice information through the microphone;
analyzing the target voice information to obtain at least one keyword;
determining target attribute information of the object to be searched according to the at least one keyword;
and acquiring the reference image according to the target attribute information.
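The keyword analysis and attribute lookup can be sketched as below. The vocabulary, the keyword-to-attribute map, and the image library are all hypothetical stand-ins; the patent does not specify how the voice information is analyzed, so this assumes the speech has already been transcribed to text:

```python
def parse_keywords(text, vocabulary):
    """Keep only the words of the transcribed utterance that are known keywords."""
    return [w for w in text.lower().split() if w in vocabulary]

def attributes_from_keywords(keywords, attribute_map):
    """Merge the attribute fragments contributed by each keyword."""
    attrs = {}
    for kw in keywords:
        attrs.update(attribute_map.get(kw, {}))
    return attrs

def pick_reference_image(attrs, library):
    """library: list of (attribute dict, image id) pairs; return the first
    image whose attributes all match the target attribute information."""
    for item_attrs, image_id in library:
        if all(attrs.get(k) == v for k, v in item_attrs.items()):
            return image_id
    return None
```

So an utterance like "find my red wallet" would yield the keywords, then the target attribute information, and finally a stored reference image for the matching object.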
Optionally, the program further includes instructions for performing the following steps:
acquiring the current position of a user;
generating a navigation route between the current location and a corresponding location of the target image;
displaying the navigation route on the indoor map.
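The patent names a path planning algorithm without specifying one. A breadth-first search over a simple occupancy grid is one minimal choice for an indoor map; the grid representation (0 = walkable, 1 = wall) is an assumption for illustration:

```python
from collections import deque

def plan_route(grid, start, goal):
    """Shortest route on an occupancy grid via BFS.

    grid  -- list of rows; 0 means walkable, 1 means blocked
    start -- (row, col) of the user's current position
    goal  -- (row, col) corresponding to the target image's position
    Returns the list of cells from start to goal, or None if unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}          # also serves as the visited set
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []             # walk the predecessor chain back to start
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None
```

The returned cell sequence is what would be drawn on the indoor map as the navigation route; BFS guarantees it is a shortest route in step count.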
It can be seen that the electronic device described in this embodiment of the application is in a wireless Internet of Things that includes a plurality of cameras. The device acquires a reference image of the object to be searched, acquires a current video clip through the plurality of cameras, and searches the current video clip according to the reference image. If a target image that successfully matches the reference image is found, the corresponding position of the target image is marked on the indoor map to guide the user to the object to be searched. In this way, when the user needs to find an object, video can be captured by the cameras and searched using an image of the object, with the position of the search result taken as the position of the object, helping the user find the required object quickly.
Fig. 4 is a block diagram of the functional units of an image processing apparatus 400 based on the wireless Internet of Things according to an embodiment of the present application. The apparatus 400 is applied to an electronic device in the wireless Internet of Things, where the wireless Internet of Things includes a plurality of cameras. The apparatus 400 includes: a first acquisition unit 401, a second acquisition unit 402, a search unit 403, and a marking unit 404, wherein,
the first obtaining unit 401 is configured to obtain a reference image of an object to be searched;
the second obtaining unit 402 is configured to obtain a current video clip through the multiple cameras;
the searching unit 403 is configured to search the current video segment according to the reference image;
the marking unit 404 is configured to mark a corresponding position of the target image in an indoor map if a target image successfully matched with the reference image is searched, so as to guide a user to search for the object to be searched.
Optionally, the apparatus 400 is further specifically configured to:
if the image content corresponding to the reference image is not searched, acquiring historical video clips of preset time periods shot by the multiple cameras;
searching the historical video clip according to the reference image to obtain at least one result image, wherein each result image corresponds to one shooting moment;
acquiring a target result image of a shooting moment closest to the current time from the at least one result image;
estimating a target searching range of the object to be searched according to the target result image;
and marking the target searching range in the indoor map so as to guide a user to search the object to be searched.
Optionally, the wireless internet of things further includes a sensor, and in the aspect of acquiring the current video clip through the plurality of cameras, the second acquiring unit 402 is specifically configured to:
acquiring a plurality of detection parameters through the sensor, wherein each detection parameter is a detection parameter of an area or an object;
determining a detection parameter meeting a preset requirement in the plurality of detection parameters to obtain at least one target detection parameter;
and determining a camera corresponding to the at least one target detection parameter, and acquiring the current video clip corresponding to the camera.
Optionally, the wireless internet of things further includes a microphone, and in the aspect of acquiring the reference image of the object to be searched, the first acquiring unit 401 is specifically configured to:
acquiring target voice information through the microphone;
analyzing the target voice information to obtain at least one keyword;
determining target attribute information of the object to be searched according to the at least one keyword;
and acquiring the reference image according to the target attribute information.
Optionally, the apparatus 400 is further specifically configured to:
acquiring the current position of a user;
generating a navigation route between the current location and a corresponding location of the target image;
displaying the navigation route on the indoor map.
It can be seen that the image processing apparatus based on the wireless Internet of Things described in the embodiments of the present application is applied to an electronic device in the wireless Internet of Things, where the wireless Internet of Things includes a plurality of cameras. The apparatus acquires a reference image of the object to be searched, acquires a current video clip through the plurality of cameras, and searches the current video clip according to the reference image. If a target image that successfully matches the reference image is found, the corresponding position of the target image is marked on the indoor map to guide the user to the object to be searched. In this way, when a user needs to find an object, video can be captured by the cameras and searched using an image of the object, with the position of the search result taken as the position of the object, guiding the user to find the required object quickly.
It can be understood that the functions of the program modules of the image processing apparatus based on the wireless Internet of Things can be implemented according to the foregoing method embodiments; for the specific implementation process, reference may be made to the relevant description of the method embodiments, which is not repeated here.
Embodiments of the present application also provide a computer storage medium, wherein the computer storage medium stores a computer program for electronic data exchange, and the computer program enables a computer to execute part or all of the steps of any one of the methods as described in the above method embodiments.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any of the methods as described in the above method embodiments. The computer program product may be a software installation package.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative: the division of the units is only one type of logical-function division, and other divisions may be used in practice; multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be in electrical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable memory. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned memory includes various media capable of storing program code, such as a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, and a magnetic or optical disk.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable memory, which may include: flash Memory disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
The foregoing detailed description of the embodiments of the present application has been presented to illustrate the principles and implementations of the present application, and the above description of the embodiments is only provided to help understand the method and the core concept of the present application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (10)

1. An image processing method based on a wireless Internet of Things, applied to an electronic device, wherein the electronic device is in the wireless Internet of Things, the wireless Internet of Things comprises a plurality of cameras, and the method comprises:
acquiring a reference image of an object to be searched;
acquiring a current video clip through the plurality of cameras;
searching the current video clip according to the reference image;
and if a target image successfully matched with the reference image is searched, marking the corresponding position of the target image in an indoor map so as to guide a user to search the object to be searched.
2. The method of claim 1, further comprising:
if the image content corresponding to the reference image is not searched, acquiring historical video clips of preset time periods shot by the multiple cameras;
searching the historical video clip according to the reference image to obtain at least one result image, wherein each result image corresponds to one shooting moment;
acquiring a target result image of a shooting moment closest to the current time from the at least one result image;
estimating a target searching range of the object to be searched according to the target result image;
and marking the target searching range in the indoor map so as to guide a user to search the object to be searched.
3. The method according to claim 1 or 2, wherein the wireless internet of things further comprises a sensor, and the acquiring the current video clip by the plurality of cameras comprises:
acquiring a plurality of detection parameters through the sensor, wherein each detection parameter is a detection parameter of an area or an object;
determining a detection parameter meeting a preset requirement in the plurality of detection parameters to obtain at least one target detection parameter;
and determining a camera corresponding to the at least one target detection parameter, and acquiring the current video clip corresponding to the camera.
4. The method according to claim 1 or 2, wherein the wireless internet of things further comprises a microphone, and the acquiring the reference image of the object to be searched comprises:
acquiring target voice information through the microphone;
analyzing the target voice information to obtain at least one keyword;
determining target attribute information of the object to be searched according to the at least one keyword;
and acquiring the reference image according to the target attribute information.
5. The method according to claim 1 or 2, characterized in that the method further comprises:
acquiring the current position of a user;
generating a navigation route between the current location and a corresponding location of the target image;
displaying the navigation route on the indoor map.
6. An image processing apparatus based on a wireless Internet of Things, applied to an electronic device, wherein the electronic device is in the wireless Internet of Things, the wireless Internet of Things comprises a plurality of cameras, and the apparatus comprises: a first acquisition unit, a second acquisition unit, a search unit and a marking unit, wherein,
the first acquisition unit is used for acquiring a reference image of an object to be searched;
the second obtaining unit is used for obtaining the current video clip through the plurality of cameras;
the searching unit is used for searching the current video clip according to the reference image;
and the marking unit is used for marking the corresponding position of the target image in the indoor map if the target image successfully matched with the reference image is searched, so as to guide a user to search the object to be searched.
7. The apparatus of claim 6, wherein the apparatus is further specifically configured to:
if the image content corresponding to the reference image is not searched, acquiring historical video clips of preset time periods shot by the multiple cameras;
searching the historical video clip according to the reference image to obtain at least one result image, wherein each result image corresponds to one shooting moment;
acquiring a target result image of a shooting moment closest to the current time from the at least one result image;
estimating a target searching range of the object to be searched according to the target result image;
and marking the target searching range in the indoor map so as to guide a user to search the object to be searched.
8. The apparatus according to claim 6 or 7, wherein the wireless internet of things further comprises a sensor, and in the aspect of acquiring the current video clip through the plurality of cameras, the second acquiring unit is specifically configured to:
acquiring a plurality of detection parameters through the sensor, wherein each detection parameter is a detection parameter of an area or an object;
determining a detection parameter meeting a preset requirement in the plurality of detection parameters to obtain at least one target detection parameter;
and determining a camera corresponding to the at least one target detection parameter, and acquiring the current video clip corresponding to the camera.
9. An electronic device, comprising a processor and a memory, wherein the memory stores one or more programs configured to be executed by the processor, and the programs comprise instructions for performing the steps in the method of any one of claims 1-5.
10. A computer-readable storage medium, wherein the computer-readable storage medium stores a computer program for electronic data exchange, and the computer program causes a computer to perform the method according to any one of claims 1-5.
CN202110805207.6A 2021-07-16 2021-07-16 Image processing method based on wireless Internet of things and related equipment Pending CN113542689A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110805207.6A CN113542689A (en) 2021-07-16 2021-07-16 Image processing method based on wireless Internet of things and related equipment

Publications (1)

Publication Number Publication Date
CN113542689A true CN113542689A (en) 2021-10-22

Family

ID=78128388

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110805207.6A Pending CN113542689A (en) 2021-07-16 2021-07-16 Image processing method based on wireless Internet of things and related equipment

Country Status (1)

Country Link
CN (1) CN113542689A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115017365A (en) * 2022-05-23 2022-09-06 北京声智科技有限公司 Article searching method, device, server, terminal and storage medium

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105222774A (en) * 2015-10-22 2016-01-06 广东欧珀移动通信有限公司 A kind of indoor orientation method and user terminal
CN106358152A (en) * 2016-10-19 2017-01-25 珠海市魅族科技有限公司 Positioning method and positioning apparatus
CN108650245A (en) * 2018-04-24 2018-10-12 上海奥孛睿斯科技有限公司 Internet of things system based on augmented reality and operation method
CN109614897A (en) * 2018-11-29 2019-04-12 平安科技(深圳)有限公司 A kind of method and terminal of interior lookup article
CN110213723A (en) * 2019-05-16 2019-09-06 武汉数矿科技股份有限公司 A kind of method and apparatus quickly determining suspect according to track
US20200070348A1 (en) * 2017-05-11 2020-03-05 Cloudminds (Shenzhen) Robotics Systems Co., Ltd. Article searching method and robot thereof
CN111225340A (en) * 2019-11-26 2020-06-02 恒大智慧科技有限公司 Scenic spot object searching method and device and storage medium
CN111401325A (en) * 2020-04-21 2020-07-10 英华达(上海)科技有限公司 System and method for quickly searching for articles
CN112148924A (en) * 2019-06-28 2020-12-29 杭州海康威视数字技术股份有限公司 Luggage case retrieval method and device and electronic equipment
CN112380391A (en) * 2020-10-13 2021-02-19 特斯联科技集团有限公司 Video processing method and device based on Internet of things, electronic equipment and storage medium
CN112566037A (en) * 2020-11-19 2021-03-26 努比亚技术有限公司 Article tracking method, terminal and computer-readable storage medium
CN112637259A (en) * 2020-10-10 2021-04-09 峥峰(南京)物联信息技术有限公司 Intelligent monitoring method based on Internet of things


Similar Documents

Publication Publication Date Title
US11062580B2 (en) Methods and systems for updating an event timeline with event indicators
US10977918B2 (en) Method and system for generating a smart time-lapse video clip
US20210125475A1 (en) Methods and devices for presenting video information
US9489580B2 (en) Method and system for cluster-based video monitoring and event categorization
CN117496643A (en) System and method for detecting and responding to visitor of smart home environment
CN110546627B (en) Video integration with home assistant
CN113542689A (en) Image processing method based on wireless Internet of things and related equipment
CN117541913A (en) Digital twinning-based deployment scene generation method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20211022