CN108664912B - Information processing method and device, computer storage medium and terminal - Google Patents


Info

Publication number
CN108664912B
Authority
CN
China
Prior art keywords: activity, target object, image, threshold, state
Prior art date
Legal status: Active
Application number
CN201810419877.2A
Other languages
Chinese (zh)
Other versions
CN108664912A (en)
Inventor
张贵川
徐浩
吴明辉
Current Assignee
Shanghai Mingsheng Pinzhi Artificial Intelligence Technology Co ltd
Beijing Minglue Zhaohui Technology Co Ltd
Original Assignee
Beijing Supertool Internet Technology Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Supertool Internet Technology Ltd
Priority to CN201810419877.2A
Publication of CN108664912A
Application granted
Publication of CN108664912B
Status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/40 - Scenes; Scene-specific elements in video content
    • G06V20/46 - Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G06V20/47 - Detecting features for summarising video content
    • G06V20/50 - Context or environment of the image
    • G06V20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 - Television systems
    • H04N7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Abstract

An information processing method and apparatus, a computer storage medium and a terminal. The method includes: acquiring activity information of a target object in a video image; and statistically analyzing the acquired activity information to obtain the activity state of the target object. By acquiring the activity information of the target object from the video image, the embodiments of the invention improve the efficiency of analyzing the activity state of the target object.

Description

Information processing method and device, computer storage medium and terminal
Technical Field
The present disclosure relates to, but is not limited to, image processing technologies, and in particular to an information processing method, an information processing apparatus, a computer storage medium, and a terminal.
Background
Analysis of an animal's activity state is an important basis for analyzing and managing the animal. Examples include: expelling or exterminating certain animals in some areas based on an analysis of their activity state; and formulating or adjusting breeding plans for animals based on such analysis.
In the catering industry, rodent activity is checked and controlled by technicians after video data is acquired through a preset image acquisition device. For example, recorded video data is obtained in real time from a monitoring camera in a restaurant kitchen; health inspectors of the relevant departments then manually review the video content at certain intervals and, in combination with the kitchen scene, determine whether rodent activity exists. When rodent activity that may cause a sanitation hazard is identified, traps are set to clear the rodents.
Because rodents are small and move covertly, the method of manually reviewing video data combined with on-site inspection easily misses them, especially the smaller individuals. In addition, because the video data is highly redundant, manual review is time-consuming and labor-intensive. Taking a kitchen as an example: in areas where large amounts of food materials such as grains, meat products and vegetables are stored, rodents often crawl over, sniff at and gnaw on kitchenware, tableware and food materials while the kitchen is unattended, damaging kitchen appliances, contaminating food materials and creating other sanitation and safety problems. Reviewing the recorded video data at fixed intervals and combining it with on-site inspection may identify some rodent activity areas, but such identification is usually not timely and consumes a great deal of manpower, and staff who review video data for long periods easily miss activity areas. As a result, long-term rodent control is inefficient, the food safety problem cannot be effectively contained, and uncontrolled rodent activity can lead to more serious food safety incidents.
In summary, staff working manually cannot analyze rodent activity states accurately and efficiently, which hampers the control of rodent activity and harms food hygiene and safety.
Disclosure of Invention
The following is a summary of the subject matter described in detail herein. This summary is not intended to limit the scope of the claims.
Embodiments of the present invention provide an information processing method and apparatus, a computer storage medium, and a terminal, which can improve the efficiency of analyzing the activity state of a target object.
An embodiment of the present invention provides an information processing method, including:
acquiring activity information of a target object in a video image;
and carrying out statistical analysis on the acquired activity information to obtain the activity state of the target object.
Optionally, the acquiring the activity information of the target object in the video image includes:
determining whether each frame of image contained in the video image contains the target object;
for each frame including the image of the target object, marking the target object according to a preset image identifier;
and recording the activity position of the target object and the activity time corresponding to the activity position frame by frame for each frame of image marked with the image identifier.
Optionally, before the obtaining of the activity information of the target object in the video image, the information processing method includes:
determining whether the video image is received or not according to a preset monitoring period;
wherein the video image comprises: when the video data acquired by the preset image acquisition device is longer than a first preset time length, the acquired video data is segmented to obtain a video clip with the first preset time length.
Optionally, the statistically analyzing the activity state of the target object includes:
setting one or more groups of activity state reference parameters;
according to the set activity state reference parameters of each group, carrying out statistical analysis on the acquired activity information to obtain the activity state corresponding to each activity state reference parameter;
wherein each set of said activity state reference parameters comprises the corresponding: an activity duration threshold and/or an activity area size threshold.
Optionally, before the obtaining of the activity information of the target object in the video image, the information processing method further includes:
within a second preset time length after the video image is received, comparing two adjacent frames of images frame by frame according to the time sequencing sequence of each frame of image contained in the video image;
when the pixel difference percentage of two adjacent frames of images is smaller than a preset first percentage threshold value, determining that the next frame of image is an initial image;
and performing coordinate modeling according to the determined initial image so as to record the activity position according to the coordinate modeling.
Optionally, the determining whether each frame of image contains the target object includes:
comparing the initial image with a subsequent adjacent image, and when the percentage of pixel difference between the initial image and the subsequent adjacent image is greater than a preset second percentage threshold, analyzing and confirming whether the subsequent adjacent image contains the target object; when the percentage of the pixel difference between the initial image and the subsequent adjacent image is smaller than a preset second percentage threshold value, determining a subsequent group of adjacent images by taking the subsequent adjacent images as the previous images according to the time sequencing order;
for the determined next group of adjacent images, when the percentage of the pixel difference of the adjacent images is greater than a preset second percentage threshold value, analyzing and confirming whether the next adjacent images contain the target object; and when the percentage of the pixel difference of the adjacent images is smaller than a preset second percentage threshold value, determining a next group of adjacent images by taking the next adjacent images as the previous images according to the time sequencing order so as to continuously confirm whether the target object is contained.
Optionally, the information processing method further includes:
and respectively counting the activity states corresponding to the reference parameters of the activity states to obtain the activity frequency corresponding to each activity state.
Optionally, the target object includes a rodent, and the obtaining of the activity state of the target object includes:
according to the acquired activity information, the activity duration of the rodents in the activity areas of all sizes is statistically analyzed;
and determining the activity state of the mouse in the activity areas of all sizes according to the statistical analysis result of the activity duration of the mouse in the activity areas of all sizes.
On the other hand, an embodiment of the present invention further provides an information processing apparatus, including: an acquisition unit and a statistical analysis unit; wherein,
the acquisition unit is used for: acquiring activity information of a target object in a video image;
the statistical analysis unit is used for: and statistically analyzing the acquired activity information to obtain the activity state of the target object.
Optionally, the obtaining unit is specifically configured to:
determining whether each frame of image contained in the video image contains the target object;
for each frame including the image of the target object, marking the target object according to a preset image identifier;
and recording the activity position of the target object and the activity time corresponding to the activity position frame by frame for each frame of image marked with the image identifier.
Optionally, the information processing apparatus further includes a monitoring unit, configured to: determining whether the video image is received or not according to a preset monitoring period;
wherein the video image comprises: when the video data acquired by the preset image acquisition device is longer than a first preset time length, the acquired video data is segmented to obtain a video clip with the first preset time length.
Optionally, the statistical analysis unit is specifically configured to:
setting one or more groups of activity state reference parameters;
according to the set activity state reference parameters of each group, carrying out statistical analysis on the acquired activity information to obtain the activity state corresponding to each activity state reference parameter;
wherein each set of said activity state reference parameters comprises the corresponding: an activity duration threshold and/or an activity area size threshold.
Optionally, the information processing apparatus further includes an initial image determining unit configured to:
within a second preset time length after the video image is received, comparing two adjacent frames of images frame by frame according to the time sequencing sequence of each frame of image contained in the video image;
when the pixel difference percentage of two adjacent frames of images is smaller than a preset first percentage threshold value, determining that the next frame of image is an initial image;
and performing coordinate modeling according to the determined initial image so as to record the activity position according to the coordinate modeling.
Optionally, the obtaining unit is configured to determine whether each frame of image contains the target object, and includes:
comparing the initial image with a subsequent adjacent image, and when the percentage of pixel difference between the initial image and the subsequent adjacent image is greater than a preset second percentage threshold, analyzing and confirming whether the subsequent adjacent image contains the target object; when the pixel difference percentage between the initial image and a subsequent adjacent image is smaller than a preset second percentage threshold value, determining a subsequent group of adjacent images by taking the subsequent adjacent image as a previous image according to the time sequencing order;
for the determined next group of adjacent images, when the percentage of the pixel difference of the adjacent images is larger than a preset second percentage threshold value, analyzing and confirming whether the next adjacent images contain the target object; and when the percentage of the pixel difference of the adjacent images is smaller than a preset second percentage threshold value, determining a next group of adjacent images by taking the next adjacent images as the previous images according to the time sequencing order so as to continuously confirm whether the target object is contained.
Optionally, the statistical analysis unit is further configured to:
and respectively counting the activity states corresponding to the reference parameters of the activity states to obtain the activity frequency corresponding to each activity state.
Optionally, the target object includes a mouse, and the statistical analysis unit is specifically configured to:
according to the acquired activity information, the activity duration of the rodents in the activity areas of all sizes is statistically analyzed;
and determining the activity state of the mouse in the activity areas of all sizes according to the statistical analysis result of the activity duration of the mouse in the activity areas of all sizes.
In another aspect, an embodiment of the present invention further provides a computer storage medium, where computer-executable instructions are stored in the computer storage medium, and the computer-executable instructions are used to execute the information processing method.
In another aspect, an embodiment of the present invention further provides a terminal, including: a memory and a processor; wherein,
the processor is configured to execute program instructions in the memory;
and the program instructions, when read by the processor, perform the following operations:
acquiring activity information of a target object in a video image;
and carrying out statistical analysis on the acquired activity information to obtain the activity state of the target object.
Compared with the related art, the technical scheme of the application comprises the following steps: acquiring activity information of a target object in a video image; and statistically analyzing the acquired activity information to obtain the activity state of the target object. According to the embodiment of the invention, the activity information of the target object is acquired from the video image, so that the efficiency of analyzing the activity state of the target object is improved.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification; they illustrate embodiments of the invention and, together with the description, serve to explain the principles of the invention, and are not intended to limit the invention.
FIG. 1 is a flow chart of an information processing method according to an embodiment of the present invention;
fig. 2 is a block diagram of an information processing apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in detail below with reference to the accompanying drawings. It should be noted that the embodiments and features of the embodiments in the present application may be arbitrarily combined with each other without conflict.
The steps illustrated in the flow charts of the figures may be performed in a computer system such as a set of computer-executable instructions. Also, while a logical order is shown in the flow diagrams, in some cases, the steps shown or described may be performed in an order different than here.
Fig. 1 is a flowchart of an information processing method according to an embodiment of the present invention, as shown in fig. 1, including:
step 101, acquiring activity information of a target object in a video image;
optionally, the acquiring of the activity information of the target object in the video image according to the embodiment of the present invention includes:
determining whether each frame of image contained in the video image contains the target object;
marking the target object according to a preset image identifier for the image containing the target object in each frame;
and recording the activity position of the target object and the activity time corresponding to the activity position frame by frame for each frame of image marked with the image identifier.
It should be noted that the target object may be identified by analyzing and judging feature data of the target object using methods existing in the related art.
Optionally, before obtaining the activity information of the target object in the video image, the information processing method according to the embodiment of the present invention includes: determining whether the video image is received or not according to a preset monitoring period;
wherein the video image includes: when the video data acquired by the preset image acquisition device is longer than a first preset time length, the acquired video data is segmented to obtain a video clip with the first preset time length. Here, the first preset duration may be determined by analysis performed by a person skilled in the art according to the real-time requirement; for example, the first preset time period is set to 1 hour.
The monitoring period may be set with reference to a real-time requirement, and for example, the monitoring period is set to 30 seconds.
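As an illustration of the monitoring period described above, the following sketch (not part of the original disclosure) polls a watched directory every 30 seconds for newly arrived video clips and hands each new clip to the analysis step; the directory path and the process_clip callback are assumptions made only for this example.

    import time
    from pathlib import Path

    WATCH_DIR = Path("/data")      # assumed storage path for the segmented clips
    MONITOR_PERIOD_S = 30          # preset monitoring period from the text

    def poll_for_clips(process_clip, max_cycles=None):
        """Check for newly received video images once per monitoring period."""
        seen = set()
        cycles = 0
        while max_cycles is None or cycles < max_cycles:
            for clip in sorted(WATCH_DIR.glob("*.avi")):
                if clip not in seen:
                    seen.add(clip)
                    process_clip(clip)   # e.g. run the activity analysis on the clip
            time.sleep(MONITOR_PERIOD_S)
            cycles += 1

    if __name__ == "__main__":
        poll_for_clips(lambda clip: print("received video image:", clip), max_cycles=1)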
Optionally, before obtaining the activity information of the target object in the video image, the information processing method according to the embodiment of the present invention further includes:
within a second preset time length after the video image is received, comparing two adjacent frames of images frame by frame according to the time sequencing sequence of each frame of image contained in the video image;
when the pixel difference percentage of two adjacent frames of images is smaller than a preset first percentage threshold value, determining that the next frame of image is an initial image;
and performing coordinate modeling according to the determined initial image so as to record the activity position according to the coordinate modeling.
And 102, carrying out statistical analysis on the acquired activity information to obtain the activity state of the target object.
Optionally, in the embodiment of the present invention, the statistically analyzing the activity state of the target object includes:
setting one or more groups of activity state reference parameters;
according to the set activity state reference parameters of each group, performing statistical analysis on the acquired activity information to obtain the activity state corresponding to each activity state reference parameter;
wherein each set of the activity state reference parameters includes a corresponding: an activity duration threshold and/or an activity area size threshold.
Optionally, the determining whether each frame of image contains the target object according to the embodiment of the present invention includes:
comparing the initial image with a subsequent adjacent image, and when the percentage of pixel difference between the initial image and the subsequent adjacent image is greater than a preset second percentage threshold, analyzing and confirming whether the subsequent adjacent image contains the target object; when the pixel difference percentage between the initial image and a subsequent adjacent image is smaller than a preset second percentage threshold value, determining a subsequent group of adjacent images by taking the subsequent adjacent image as a previous image according to the time sequencing order;
for the determined next group of adjacent images, when the percentage of the pixel difference of the adjacent images is larger than a preset second percentage threshold value, analyzing and confirming whether the next adjacent images contain the target object; and when the percentage of the pixel difference of the adjacent images is smaller than a preset second percentage threshold value, determining a next group of adjacent images by taking the next adjacent images as the previous images according to the time sequencing order so as to continuously confirm whether the target object is contained.
Optionally, after obtaining the active state, the information processing method according to the embodiment of the present invention further includes:
and respectively counting the activity states corresponding to the reference parameters of the activity states to obtain the activity frequency corresponding to each activity state.
Optionally, the target object in the embodiment of the present invention includes a mouse, and obtaining the activity state includes:
according to the acquired activity information, the activity duration of the rodents in the activity areas of all sizes is statistically analyzed;
and determining the activity state of the mouse in the activity areas of all sizes according to the statistical analysis result of the activity duration of the mouse in the activity areas of all sizes.
The embodiment of the invention can be applied to the analysis of the activity states of other animals influencing the health of human life and property, and also can be applied to the livestock production process of other animals.
Compared with the related art, the technical scheme of the application comprises the following steps: acquiring activity information of a target object in a video image; and carrying out statistical analysis on the acquired activity information to obtain the activity state of the target object. According to the embodiment of the invention, the activity information of the target object is acquired from the video image, so that the efficiency of analyzing the activity state of the target object is improved.
Fig. 2 is a block diagram of an information processing apparatus according to an embodiment of the present invention, as shown in fig. 2, including: an acquisition unit and a statistical analysis unit; wherein,
the acquisition unit is used for: acquiring activity information of a target object in a video image;
optionally, the obtaining unit in the embodiment of the present invention is specifically configured to:
determining whether each frame of image contained in the video image contains the target object;
for each frame including the image of the target object, marking the target object according to a preset image identifier;
and recording the activity position of the target object and the activity time corresponding to the activity position frame by frame for each frame of image marked with the image identifier.
The statistical analysis unit is used for: and carrying out statistical analysis on the acquired activity information to obtain the activity state of the target object.
Optionally, the information processing apparatus according to the embodiment of the present invention further includes a monitoring unit, configured to: determining whether the video image is received or not according to a preset monitoring period;
wherein the video image comprises: when the video data acquired by the preset image acquisition device is longer than a first preset time length, the acquired video data is segmented to obtain a video clip with the first preset time length. The monitoring period may be set with reference to a real-time requirement, and for example, the monitoring period is set to 30 seconds.
Optionally, the statistical analysis unit in the embodiment of the present invention is specifically configured to:
setting one or more groups of activity state reference parameters;
according to the set activity state reference parameters of each group, performing statistical analysis on the acquired activity information to obtain the activity state corresponding to each activity state reference parameter;
wherein each set of said activity state reference parameters comprises the corresponding: an activity duration threshold and/or an activity area size threshold.
Optionally, the information processing apparatus in the embodiment of the present invention further includes an initial image determining unit, configured to:
within a second preset time length after the video image is received, comparing two adjacent frames of images frame by frame according to the time sequencing sequence of each frame of image contained in the video image;
when the pixel difference percentage of two adjacent frames of images is smaller than a preset first percentage threshold value, determining that the next frame of image is an initial image;
and performing coordinate modeling according to the determined initial image so as to record the activity position according to the coordinate modeling.
Optionally, the determining, by the obtaining unit in the embodiment of the present invention, whether each frame of image includes the target object includes:
comparing the initial image with a subsequent adjacent image, and when the percentage of pixel difference between the initial image and the subsequent adjacent image is greater than a preset second percentage threshold value, analyzing and confirming whether the subsequent adjacent image contains the target object; when the pixel difference percentage between the initial image and a subsequent adjacent image is smaller than a preset second percentage threshold value, determining a subsequent group of adjacent images by taking the subsequent adjacent image as a previous image according to the time sequencing order;
for the determined next group of adjacent images, when the percentage of the pixel difference of the adjacent images is greater than a preset second percentage threshold value, analyzing and confirming whether the next adjacent images contain the target object; and when the percentage of the pixel difference of the adjacent images is smaller than a preset second percentage threshold value, determining a next group of adjacent images by taking the next adjacent images as the previous images according to the time sequencing order so as to continuously confirm whether the target object is contained.
Optionally, the statistical analysis unit in the embodiment of the present invention is further configured to:
and respectively counting the activity states corresponding to the reference parameters of the activity states to obtain the activity frequency corresponding to each activity state.
Optionally, in the embodiment of the present invention, the target object includes a mouse, and the statistical analysis unit is specifically configured to:
according to the obtained activity information, the activity duration of the rodents in the activity areas of various sizes is statistically analyzed;
and determining the activity state of the mouse in the activity areas of all sizes according to the statistical analysis result of the activity duration of the mouse in the activity areas of all sizes.
The embodiment of the invention also provides a computer storage medium, wherein the computer storage medium stores computer executable instructions, and the computer executable instructions are used for executing the information processing method.
An embodiment of the present invention further provides a terminal, including: a memory and a processor; wherein,
the processor is configured to execute program instructions in the memory;
and the program instructions, when read by the processor, perform the following operations:
acquiring activity information of a target object in a video image;
and carrying out statistical analysis on the acquired activity information to obtain the activity state of the target object.
The method of the embodiment of the present invention is described in detail below by using application examples, which are only used for illustrating the present invention and are not used for limiting the protection scope of the present invention.
Application examples
This application example is described by taking an indoor camera as an example. As the camera captures video data, the captured data is divided into segments of one hour each to obtain the video images processed in this application example. The segmented video images may be stored under a preset storage path; they may also be encrypted, compressed or otherwise processed before transmission, and may be backed up and archived according to the related art.
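A minimal sketch of this one-hour segmentation step is given below; the use of OpenCV, the XVID codec and the /data/{:03d}.avi output pattern are illustrative assumptions, since the application example does not prescribe a particular library or file format.

    import cv2

    def split_video(src_path, out_pattern="/data/{:03d}.avi", clip_seconds=3600):
        """Cut the captured video into clips of the first preset duration (one hour)."""
        cap = cv2.VideoCapture(src_path)
        fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
        size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
                int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
        frames_per_clip = int(fps * clip_seconds)
        fourcc = cv2.VideoWriter_fourcc(*"XVID")
        clip_index, frame_index, writer = 0, 0, None
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if frame_index % frames_per_clip == 0:      # start a new one-hour clip
                if writer is not None:
                    writer.release()
                writer = cv2.VideoWriter(out_pattern.format(clip_index), fourcc, fps, size)
                clip_index += 1
            writer.write(frame)
            frame_index += 1
        if writer is not None:
            writer.release()
        cap.release()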
After the video data has been segmented, the resulting clips are sent to a server or device that analyzes the activity information of the target object. That server or device determines, according to a preset monitoring period, whether a video image has been received. The monitoring period may be set according to the real-time requirements of the activity information analysis; for example, it may be set to 30 seconds, and because the amount of status information involved is small, the occupation of bandwidth resources need not be a concern.
table 1 shows how to store the video image recording information, and as shown in table 1, the file number, the storage path, the start time, and the end time of the video image are recorded, and the transmission status of whether to transmit the video image to a server or a device that performs the target object activity information analysis is included.
File number | Storage path | Start time | End time | Sending status
207 | /data/207.avi | 2017-12-26 05:00:00 | 2017-12-26 05:59:59 | Processed
208 | /data/208.avi | 2017-12-26 06:00:00 | 2017-12-26 06:59:59 | Unprocessed
TABLE 1
Within a second preset duration after the video image is received, adjacent frames are compared frame by frame, following the time order of the frames contained in the video image. The second preset duration may be determined based on the computation and analysis speed; it must ensure that subsequently transmitted video images can still be processed in time, so that the activity information of the target object can be obtained in real time.
In this application example, whether the processing of the activity information of the target object is complete may also be determined from the processing state. Table 2 is an example of the operation state records kept in this application example; as shown in Table 2, each record includes: the file number, the time of entering the operation queue, the operation start time, the operation end time and the operation state.
TABLE 2 (reproduced as an image in the original publication; its columns are the file number, the time of entering the operation queue, the operation start time, the operation end time and the operation state)
When the pixel difference percentage of two adjacent frames of images is smaller than a preset first percentage threshold value, determining that the next frame of image is an initial image;
and carrying out coordinate modeling according to the determined initial image so as to record the activity position according to the coordinate modeling.
After the initial image has been determined and coordinate modeling has been performed, each frame of image is examined to determine whether it contains the target object. In this application example, determining whether each frame of image contains the target object includes:
comparing the initial image with a subsequent adjacent image, and when the percentage of pixel difference between the initial image and the subsequent adjacent image is greater than a preset second percentage threshold, analyzing and confirming whether the subsequent adjacent image contains the target object; when the pixel difference percentage between the initial image and the subsequent adjacent image is smaller than a preset second percentage threshold value, determining a subsequent group of adjacent images by taking the subsequent adjacent images as the previous images according to the time sequencing order;
for the determined next group of adjacent images, when the percentage of the pixel difference of the adjacent images is larger than a preset second percentage threshold value, analyzing and confirming whether the next adjacent images contain the target object; and when the percentage of the pixel difference of the adjacent images is smaller than a preset second percentage threshold value, determining a next group of adjacent images by taking the next adjacent images as the previous images according to the time sorting sequence so as to continuously confirm whether the target object is contained.
In other words, this application example uses the pixel-difference percentage to skip frames that change little from the previous frame, thereby filtering out frames that carry little information.
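A sketch of this filtering cascade is shown below; it reuses the pixel_diff_percentage helper from the previous sketch, and the detector callback and the 2% value of the second percentage threshold are assumptions, since the recognition method itself is left to the related art.

    def scan_for_target(frames, initial_index, detector, second_percent_threshold=0.02):
        """Yield (frame_index, detection) for frames that may contain the target.
        Adjacent frames whose pixel-difference percentage stays below the second
        percentage threshold carry little information and are skipped."""
        prev = frames[initial_index]
        for i in range(initial_index + 1, len(frames)):
            cur = frames[i]
            if pixel_diff_percentage(prev, cur) > second_percent_threshold:
                detection = detector(cur)    # e.g. a rodent detector from the related art
                if detection is not None:
                    yield i, detection
            prev = cur                       # move on to the next pair of adjacent frames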
after the target object is identified, the application example marks the target object according to a preset image identifier for each frame of image containing the target object;
and recording the activity position of the target object and the activity time corresponding to the activity position frame by frame for each frame of image marked with the image identification.
Table 3 illustrates the activity information recorded in this application example. As shown in Table 3, each activity record includes: a coordinate number, a file number, an activity time and the coordinates of the activity position; the coordinate number may be a number assigned, through analysis by a person skilled in the art, according to the installation position of the camera.
Coordinate number   File number   Activity time   Activity position (x1, y1, x2, y2)
102 207 2017-12-26 05:02:24 728,730,758,760
102 207 2017-12-26 05:02:25 748,736,770,758
102 207 2017-12-26 05:02:26 768,732,788,762
102 207 2017-12-26 05:02:27 750,716,774,738
102 207 2017-12-26 05:02:28 776,768,796,704
102 207 2017-12-26 05:02:29 796,656,822,690
102 207 2017-12-26 05:02:30 820,632,844,668
102 207 2017-12-26 05:02:31 816,608,842,626
102 207 2017-12-26 05:02:32 820,600,848,622
102 207 2017-12-26 05:02:33 820,582,856,616
102 207 2017-12-26 05:02:34 830,562,858,592
102 207 2017-12-26 05:02:35 868,554,898,584
102 207 2017-12-26 05:02:36 916,550,940,586
TABLE 3
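One possible in-memory form of the Table 3 records is sketched below; the field names and the CSV persistence are assumptions made for illustration.

    import csv
    from dataclasses import dataclass

    @dataclass
    class ActivityRecord:
        coordinate_number: int    # number assigned to the camera position, e.g. 102
        file_number: int          # number of the video clip, e.g. 207
        activity_time: str        # e.g. "2017-12-26 05:02:24"
        activity_position: tuple  # (x1, y1, x2, y2) in the coordinate model

    def save_records(records, path="activity_records.csv"):
        """Persist one row per frame in which the marked target object appears."""
        with open(path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["coordinate_number", "file_number", "activity_time",
                             "x1", "y1", "x2", "y2"])
            for r in records:
                writer.writerow([r.coordinate_number, r.file_number,
                                 r.activity_time, *r.activity_position])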
After the activity time of the target object is recorded and obtained, the embodiment of the invention statistically analyzes the activity state of the target object; specifically, the method may include: setting one or more groups of activity state reference parameters;
according to the set activity state reference parameters of each group, carrying out statistical analysis on the acquired activity information to obtain the activity state corresponding to each activity state reference parameter;
wherein each set of said activity state reference parameters comprises the corresponding: an activity duration threshold and/or an activity area size threshold.
In this application example, the activity area of the target object is obtained as follows:
after the target object appears in an image frame, it is marked with the preset image identifier, its activity position and the activity time corresponding to each activity position are recorded frame by frame in time order, and the activity area of the target object is then determined from the activity positions corresponding to those activity times;
in this application example, it is assumed that two sets of active state reference parameters are set, including: a first activity duration threshold and a first activity zone size threshold; a second activity duration threshold and a second activity region size threshold; determining the activity state of the target object may comprise:
when the current activity area of the target object is smaller than the first activity area size threshold within the set first activity duration threshold, determining that the target object is in a first activity state in the current activity area;
when the current activity area of the target object is smaller than the second activity area size threshold within the set second activity duration threshold, determining that the target object is in a second activity state in the current activity area;
It should be noted that the activity area size thresholds need to be analyzed, determined and adjusted based on the installation position of the camera and on the activity level and body size of the target object; for a given activity area size threshold, the activity duration threshold may be determined by analyzing the daily activity patterns of the target object.
For example, assume that for a rodent the first activity duration threshold is 5 seconds with a first activity area size threshold of a 3-by-3 pixel region, and the second activity duration threshold is 10 seconds with a second activity area size threshold of a 10-by-10 pixel region. If the relevant data indicate that the rodent is in a gnawing state in its current activity area when the first group of activity state reference parameters is satisfied, and in a hot-spot activity state when the second group is satisfied, then determining the activity state of the rodent may include:
when the current activity area of the rodent is smaller than a pixel area of 3 by 3 within 5 seconds, determining that the rodent is in a gnawing state in the current activity area;
and when the current activity area of the rodent is smaller than a 10-by-10 pixel area within 10 seconds, determining that the rodent is in a hot-spot activity state in the current activity area.
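The two-group classification just described can be sketched as follows, using the ActivityRecord form from the earlier sketch; the assumption of one record per second of activity and the measurement of the activity area as the bounding box spanned by the recorded positions are illustrative choices, not requirements of the example.

    def activity_area(records):
        """Width and height of the region spanned by a run of activity positions."""
        x1s, y1s, x2s, y2s = zip(*(r.activity_position for r in records))
        return max(x2s) - min(x1s), max(y2s) - min(y1s)

    def classify_state(records, groups=((5, 3, "gnawing state"),
                                        (10, 10, "hot-spot activity state"))):
        """Each group is (activity duration threshold in seconds,
        activity area size threshold in pixels, state label)."""
        states = []
        for duration_s, area_px, label in groups:
            window = records[:duration_s]         # assumes one record per second
            if len(window) == duration_s:
                width, height = activity_area(window)
                if width < area_px and height < area_px:
                    states.append(label)
        return states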
Based on the activity information in Table 3, this application example can append the determined activity state to each record in Table 3, forming the content of Table 4; in Table 4, the gnawing state and the hot-spot activity state are used as example activity states:
TABLE 4 (reproduced as an image in the original publication; it repeats the records of Table 3 with an added column giving the determined activity state, for example the gnawing state or the hot-spot activity state)
In this application example, frequency statistics may be computed over the activity areas and the activity states corresponding to those areas; once the statistics are complete, the frequencies can be marked and displayed in the image. The application example may also set reference parameters for judging further rodent activity states, such as lingering behavior and crawling trajectories, so that rodents can be analyzed more accurately through these parameter settings.
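A sketch of the frequency statistics described above is given below; representing each observation as an (activity area label, activity state) pair is an assumption made for this example.

    from collections import Counter

    def activity_frequency(observations):
        """observations: iterable of (area_label, state) pairs."""
        return Counter(observations)

    freq = activity_frequency([("102", "gnawing state"),
                               ("102", "gnawing state"),
                               ("102", "hot-spot activity state")])
    print(freq.most_common())
    # [(('102', 'gnawing state'), 2), (('102', 'hot-spot activity state'), 1)]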
The activity state reference parameters may also be adjusted in combination with the conditions at the rodent activity site.
based on the mouse example description, the method can be applied to food storage areas such as fresh supermarkets and kitchens, and the monitoring analysis of the mouse activity and the subsequent treatment for preventing and controlling the mouse damage are realized through the activity state analysis; the damage of the rodent gnawing activity to the apparatus and the pollution of food materials are avoided.
This application example analyzes the activity state of the target object in a timely manner, and control processing, including prevention, containment and expulsion, can be carried out on the target object based on that analysis.
Embodiments of the present invention can also be applied to analyzing the activity states of other animals that endanger human health, life and property; through timely analysis, the activity state of the target object can be determined promptly, so that control processing of the target object is achieved efficiently.
It will be understood by those skilled in the art that all or part of the steps of the above methods may be implemented by a program instructing associated hardware (e.g., a processor) to perform the steps, and the program may be stored in a computer readable storage medium, such as a read only memory, a magnetic or optical disk, and the like. Alternatively, all or part of the steps of the above embodiments may also be implemented using one or more integrated circuits. Accordingly, each module/unit in the above embodiments may be implemented in hardware, for example, by an integrated circuit to implement its corresponding function, or in software, for example, by a processor executing a program/instruction stored in a memory to implement its corresponding function. The present invention is not limited to any specific form of combination of hardware and software.
Although the embodiments of the present invention have been described above, the above description is only for the purpose of understanding the present invention, and is not intended to limit the present invention. It will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (15)

1. An information processing method characterized by comprising:
acquiring activity information of a target object in a video image;
the obtained activity information is subjected to statistical analysis to obtain the activity state of the target object;
the method further comprises the following steps:
setting at least two groups of activity state reference parameters; the at least two groups of activity state reference parameters comprise a first activity duration threshold, a first activity area size threshold, a second activity duration threshold and a second activity area size threshold;
wherein, the obtaining of the activity state of the target object by statistically analyzing the acquired activity information includes: according to the set activity state reference parameters of each group, performing statistical analysis on the acquired activity information to obtain the activity state corresponding to each activity state reference parameter;
the active state comprises a first active state and a second active state;
the performing statistical analysis on the acquired activity information to obtain the activity states corresponding to the reference parameters of the activity states includes:
when the current activity area of the target object is smaller than the first activity area size threshold within the set first activity duration threshold, determining that the target object is in a first activity state in the current activity area;
when the current activity area of the target object is smaller than the second activity area size threshold within the set second activity duration threshold, determining that the target object is in a second activity state in the current activity area;
the target object comprises a mouse, the first activity duration threshold is smaller than the second activity duration threshold, and the first activity area size threshold is smaller than the second activity area size threshold;
when the current activity area of the target object is smaller than the first activity area size threshold within the set first activity duration threshold, determining that the target object is in a first activity state in the current activity area; when the current activity area of the target object is smaller than the second activity area size threshold within the set second activity duration threshold, determining that the target object is in the second activity state in the current activity area comprises:
when the current activity area of the rodent is smaller than the first activity area size threshold within the set first activity duration time threshold, determining that the rodent is in a gnawing state in the current activity area;
and when the current activity area of the mouse is smaller than the size threshold of the second activity area within the set second activity duration time threshold, determining that the mouse is in a hot spot activity state in the current activity area.
2. The information processing method according to claim 1, wherein the acquiring of the activity information of the target object in the video image includes:
determining whether each frame of image contained in the video image contains the target object;
for each frame including the image of the target object, marking the target object according to a preset image identifier;
and recording the activity position of the target object and the activity time corresponding to the activity position frame by frame for each frame of image marked with the image identifier.
3. The information processing method according to claim 1 or 2, wherein before the acquiring of the activity information of the target object in the video image, the information processing method includes:
determining whether the video image is received or not according to a preset monitoring period;
wherein the video image comprises: when the video data acquired by the preset image acquisition device is longer than a first preset time length, the acquired video data is segmented to obtain a video clip with the first preset time length.
4. The information processing method according to claim 2, wherein before the acquiring of the activity information of the target object in the video image, the information processing method further comprises:
within a second preset time length after the video image is received, comparing two adjacent frames of images frame by frame according to the time sequencing sequence of each frame of image contained in the video image;
when the pixel difference percentage of two adjacent frames of images is smaller than a preset first percentage threshold value, determining that the next frame of image is an initial image;
and performing coordinate modeling according to the determined initial image so as to record the activity position according to the coordinate modeling.
5. The information processing method according to claim 4, wherein the determining whether each frame image contains the target object includes:
comparing the initial image with a subsequent adjacent image, and when the percentage of pixel difference between the initial image and the subsequent adjacent image is greater than a preset second percentage threshold, analyzing and confirming whether the subsequent adjacent image contains the target object; when the pixel difference percentage between the initial image and a subsequent adjacent image is smaller than a preset second percentage threshold value, determining a subsequent group of adjacent images by taking the subsequent adjacent image as a previous image according to the time sequencing order;
for the determined next group of adjacent images, when the percentage of the pixel difference of the adjacent images is greater than a preset second percentage threshold value, analyzing and confirming whether the next adjacent images contain the target object; and when the percentage of the pixel difference of the adjacent images is smaller than a preset second percentage threshold value, determining a next group of adjacent images by taking the next adjacent images as the previous images according to the time sequencing order so as to continuously confirm whether the target object is contained.
6. The information processing method according to claim 1, characterized by further comprising:
and respectively counting the activity states corresponding to the reference parameters of the activity states to obtain the activity frequency corresponding to each activity state.
7. An information processing apparatus, characterized by comprising: an acquisition unit and a statistical analysis unit; wherein,
the acquisition unit is used for: acquiring activity information of a target object in a video image;
the statistical analysis unit is used for: the obtained activity information is subjected to statistical analysis to obtain the activity state of the target object;
the statistical analysis unit is further configured to:
setting at least two groups of activity state reference parameters; the at least two groups of activity state reference parameters comprise a first activity duration threshold, a first activity area size threshold, a second activity duration threshold and a second activity area size threshold;
wherein, the statistical analysis unit is used for statistically analyzing the acquired activity information, and the acquiring the activity state of the target object comprises: according to the set activity state reference parameters of each group, carrying out statistical analysis on the acquired activity information to obtain the activity state corresponding to each activity state reference parameter;
the active state comprises a first active state and a second active state;
the performing statistical analysis on the obtained activity information to obtain the activity status corresponding to each activity status reference parameter includes:
when the current activity area of the target object is smaller than the first activity area size threshold within the set first activity duration threshold, determining that the target object is in a first activity state in the current activity area;
when the current activity area of the target object is smaller than a second activity area size threshold within a set second activity duration threshold, determining that the target object is in a second activity state in the current activity area;
the target object comprises a mouse, the first activity duration threshold is smaller than the second activity duration threshold, and the first activity area size threshold is smaller than the second activity area size threshold;
when the current activity area of the target object is smaller than the first activity area size threshold within the set first activity duration threshold, determining that the target object is in a first activity state in the current activity area; when the current activity area of the target object is smaller than the second activity area size threshold within the set second activity duration threshold, determining that the target object is in the second activity state in the current activity area comprises:
when the current activity area of the rodent is smaller than the first activity area size threshold within the set first activity duration time threshold, determining that the rodent is in a gnawing state in the current activity area;
and when the current activity area of the mouse is smaller than the second activity area size threshold within the set second activity duration threshold, determining that the mouse is in a hot spot activity state in the current activity area.
8. The information processing apparatus according to claim 7, wherein the acquisition unit is specifically configured to:
determining whether each frame of image contained in the video image contains the target object;
for each frame of image containing the target object, marking the target object according to a preset image identifier;
and recording the activity position of the target object and the activity time corresponding to the activity position frame by frame for each frame of image marked with the image identifier.
9. The information processing apparatus according to claim 7 or 8, characterized in that the information processing apparatus further comprises a monitoring unit configured to: determining whether the video image is received or not according to a preset monitoring period;
wherein the video image comprises: when the video data acquired by the preset image acquisition device is longer than a first preset time length, the acquired video data is segmented to obtain a video clip with the first preset time length.
10. The information processing apparatus according to claim 8, further comprising an initial image determination unit configured to:
within a second preset time length after the video image is received, comparing two adjacent frames of images frame by frame according to the time sequencing sequence of each frame of image contained in the video image;
when the pixel difference percentage of two adjacent frames of images is smaller than a preset first percentage threshold value, determining that the next frame of image is an initial image;
and performing coordinate modeling according to the determined initial image so as to record the activity position according to the coordinate modeling.
11. The information processing apparatus according to claim 10, wherein the acquisition unit configured to determine whether each frame image contains the target object includes:
comparing the initial image with its subsequent adjacent image; when the pixel difference percentage between the initial image and the subsequent adjacent image is greater than a preset second percentage threshold, analyzing and confirming whether the subsequent adjacent image contains the target object; when the pixel difference percentage between the initial image and the subsequent adjacent image is smaller than the preset second percentage threshold, taking the subsequent adjacent image as the preceding image and determining the next group of adjacent images according to the chronological order;
and for the determined next group of adjacent images, when the pixel difference percentage of the adjacent images is greater than the preset second percentage threshold, analyzing and confirming whether the latter adjacent image contains the target object; when the pixel difference percentage of the adjacent images is smaller than the preset second percentage threshold, taking the latter adjacent image as the preceding image and determining the next group of adjacent images according to the chronological order, so as to continue confirming whether the target object is contained.
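A simplified sketch of the pairwise comparison in claim 11: the costlier target detector is invoked only when the pixel difference percentage exceeds the second percentage threshold, and the latter frame of each pair becomes the preceding image of the next pair. `contains_target` is a placeholder detector, and `pixel_diff_percent` is repeated from the previous sketch so the example stays self-contained.

```python
import numpy as np

def pixel_diff_percent(a, b, diff_thresh=30):
    return 100.0 * (np.abs(a.astype(np.int16) - b.astype(np.int16)) > diff_thresh).mean()

def scan_for_target(frames, initial, contains_target, second_percent_threshold=5.0):
    """Walk adjacent frame pairs starting from the initial image; run the
    detector only on frames that changed noticeably (illustrative reading)."""
    reference = initial
    detections = []
    for idx, frame in enumerate(frames):
        if pixel_diff_percent(reference, frame) > second_percent_threshold:
            if contains_target(frame):       # placeholder detector (assumption)
                detections.append(idx)
        # the current frame becomes the preceding image of the next pair
        reference = frame
    return detections
```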
12. The information processing apparatus according to claim 7, wherein the statistical analysis unit is further configured to:
and counting, for each set of activity state reference parameters, the activity states corresponding to that set of reference parameters, so as to obtain the activity frequency of each activity state.
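The frequency counting in claim 12 amounts to a tally per activity state; a minimal sketch using `collections.Counter`, where the state labels are illustrative.

```python
from collections import Counter

def activity_frequency(states):
    """states: one activity-state label per analysed interval.
    Returns how often each activity state occurred (its activity frequency)."""
    return Counter(states)

# activity_frequency(["gnawing", "gnawing", "hot spot activity"])
# -> Counter({'gnawing': 2, 'hot spot activity': 1})
```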
13. The information processing apparatus according to claim 7, 8, 10, or 11, wherein the target object includes a rodent, and the statistical analysis unit is specifically configured to:
statistically analyzing, according to the acquired activity information, the activity duration of the rodent in activity areas of each size;
and determining, from the statistical analysis result of the activity duration of the rodent in activity areas of each size, the activity state of the rodent in the activity areas of each size.
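A minimal sketch of the per-area-size statistics in claim 13; the area bins and the `(area, duration)` record layout are assumptions.

```python
from collections import defaultdict

def duration_by_area(records,
                     area_bins=((0, 400), (400, 2500), (2500, float("inf")))):
    """Accumulate dwell time per activity-area size bucket.
    records: list of (area_px, duration_s) per analysed interval (assumed layout)."""
    totals = defaultdict(float)
    for area, duration in records:
        for lo, hi in area_bins:
            if lo <= area < hi:
                totals[(lo, hi)] += duration
                break
    return dict(totals)
```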
14. A computer storage medium having stored therein computer-executable instructions for executing the information processing method according to any one of claims 1 to 6.
15. A terminal, comprising: a memory and a processor; wherein,
the processor is configured to execute program instructions stored in the memory;
and the program instructions, when read and executed by the processor, perform the following operations:
acquiring activity information of a target object in a video image;
performing statistical analysis on the acquired activity information to obtain the activity state of the target object;
further comprising:
setting at least two groups of activity state reference parameters; the at least two groups of activity state reference parameters comprise a first activity duration threshold, a first activity area size threshold, a second activity duration threshold and a second activity area size threshold;
wherein, the obtaining of the activity state of the target object by statistically analyzing the acquired activity information includes: according to the set activity state reference parameters of each group, performing statistical analysis on the acquired activity information to obtain the activity state corresponding to each activity state reference parameter;
the active state comprises a first active state and a second active state;
the performing statistical analysis on the obtained activity information to obtain the activity status corresponding to each activity status reference parameter includes:
when the current activity area of the target object is smaller than the first activity area size threshold within the set first activity duration threshold, determining that the target object is in a first activity state in the current activity area;
when the current activity area of the target object is smaller than the second activity area size threshold within the set second activity duration threshold, determining that the target object is in a second activity state in the current activity area;
the target object comprises a rodent, the first activity duration threshold is smaller than the second activity duration threshold, and the first activity area size threshold is smaller than the second activity area size threshold;
when the current activity area of the target object is smaller than the first activity area size threshold within the set first activity duration threshold, determining that the target object is in a first activity state in the current activity area; when the current activity area of the target object is smaller than the second activity area size threshold within the set second activity duration threshold, determining that the target object is in the second activity state in the current activity area comprises:
when the current activity area of the rodent is smaller than the first activity area size threshold within the set first activity duration threshold, determining that the rodent is in a gnawing state in the current activity area;
and when the current activity area of the rodent is smaller than the second activity area size threshold within the set second activity duration threshold, determining that the rodent is in a hot spot activity state in the current activity area.
CN201810419877.2A 2018-05-04 2018-05-04 Information processing method and device, computer storage medium and terminal Active CN108664912B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810419877.2A CN108664912B (en) 2018-05-04 2018-05-04 Information processing method and device, computer storage medium and terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810419877.2A CN108664912B (en) 2018-05-04 2018-05-04 Information processing method and device, computer storage medium and terminal

Publications (2)

Publication Number Publication Date
CN108664912A CN108664912A (en) 2018-10-16
CN108664912B true CN108664912B (en) 2022-12-20

Family

ID=63781845

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810419877.2A Active CN108664912B (en) 2018-05-04 2018-05-04 Information processing method and device, computer storage medium and terminal

Country Status (1)

Country Link
CN (1) CN108664912B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111539974B (en) * 2020-04-07 2022-11-11 北京明略软件系统有限公司 Method and device for determining track, computer storage medium and terminal

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101584585A (en) * 2009-06-08 2009-11-25 浙江大学 Experimental system for the determination of white rat behavior
CN102046076A (en) * 2008-04-03 2011-05-04 Kai医药公司 Non-contact physiologic motion sensors and methods for use
CN102356753A (en) * 2011-07-01 2012-02-22 浙江大学 Infrared thermal image tracking-based measuring and experimental system for unbiased animal behaviors
WO2012057860A1 (en) * 2010-10-28 2012-05-03 Medtronic, Inc. Heart failure monitoring and notification
CN103577832A (en) * 2012-07-30 2014-02-12 华中科技大学 People flow statistical method based on spatio-temporal context
CN104075731A (en) * 2013-03-12 2014-10-01 阿迪达斯股份公司 Methods Of Determining Performance Information For Individuals And Sports Objects
CN104504245A (en) * 2014-12-04 2015-04-08 吉林大学 Method of utilizing GPS trip survey data to identify trips and activities
CN106303225A (en) * 2016-07-29 2017-01-04 努比亚技术有限公司 A kind of image processing method and electronic equipment
CN106462027A (en) * 2014-06-23 2017-02-22 本田技研工业株式会社 System and method for responding to driver state
CN106845620A (en) * 2016-12-19 2017-06-13 江苏慧眼数据科技股份有限公司 A kind of passenger flow counting method based on quene state analysis
CN107157458A (en) * 2017-05-25 2017-09-15 中国科学院合肥物质科学研究院 The sensor-based system and method for a kind of animal individual feed intake and health monitoring
CN107172590A (en) * 2017-06-30 2017-09-15 北京奇虎科技有限公司 Moving state information processing method, device and mobile terminal based on mobile terminal
CN107197161A (en) * 2017-06-30 2017-09-22 北京金山安全软件有限公司 Image data processing method and device, electronic equipment and storage medium
CN107454395A (en) * 2017-08-23 2017-12-08 上海安威士科技股份有限公司 A kind of high-definition network camera and intelligent code stream control method
CN107564034A (en) * 2017-07-27 2018-01-09 华南理工大学 The pedestrian detection and tracking of multiple target in a kind of monitor video

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100565557C (en) * 2008-06-06 2009-12-02 重庆大学 System for tracking infrared human body target based on corpuscle dynamic sampling model
US10555676B2 (en) * 2009-05-20 2020-02-11 Sotera Wireless, Inc. Method for generating alarms/alerts based on a patient's posture and vital signs
US9122931B2 (en) * 2013-10-25 2015-09-01 TCL Research America Inc. Object identification system and method
CN106686452B (en) * 2016-12-29 2020-03-27 北京奇艺世纪科技有限公司 Method and device for generating dynamic picture
CN107346415A (en) * 2017-06-08 2017-11-14 小草数语(北京)科技有限公司 Method of video image processing, device and monitoring device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
A traffic flow statistics algorithm based on a vehicle occlusion model; Qi Meibin et al.; Chinese Journal of Scientific Instrument; 2010-06-15 (Issue 06); pp. 137-143 *
Effects of volatiles from camphor tree branches and leaves on the spontaneous behavior of mice; Wang Yanying et al.; Ecology and Environmental Sciences; 2012-08-18 (Issue 08); pp. 41-46 *

Also Published As

Publication number Publication date
CN108664912A (en) 2018-10-16

Similar Documents

Publication Publication Date Title
Wurtz et al. Recording behaviour of indoor-housed farm animals automatically using machine vision technology: A systematic review
Dubé et al. A review of network analysis terminology and its application to foot‐and‐mouth disease modelling and policy development
CN112655019A (en) Monitoring livestock in an agricultural enclosure
Roche et al. How do resources influence control measures during a simulated outbreak of foot and mouth disease in Australia?
CN111709374B (en) Bird condition detection method, bird condition detection device, computer equipment and storage medium
CN111709372B (en) Bird repelling method and device, computer equipment and storage medium
CN110830772A (en) Kitchen video analysis resource scheduling method, device and system
Dreyfus et al. Risk factors for new infection with Leptospira in meat workers in New Zealand
CN109886555A (en) The monitoring method and device of food safety
US11967154B2 (en) Video analytics to detect instances of possible animal abuse based on mathematical stick figure models
Pozo et al. Analysis of the cattle movement network and its association with the risk of bovine tuberculosis at the farm level in Castilla y Leon, Spain
Boelaert et al. EU-wide monitoring of biological hazards along the food chain: achievements, challenges and EFSA vision for the future
CN108664912B (en) Information processing method and device, computer storage medium and terminal
CN109784239A (en) The recognition methods of winged insect quantity and device
CN108460370B (en) Fixed poultry life information alarm device
CN112150498A (en) Method and device for determining posture information, storage medium and electronic device
KR20210067602A (en) Method for Establishing Prevention Boundary of Epidemics of Livestock Based On Image Information Analysis
CN110427872A (en) A kind of livestock monitoring method, device and equipment
Alton et al. Suitability of sentinel abattoirs for syndromic surveillance using provincially inspected bovine abattoir condemnation data
KR102624927B1 (en) System and method for diarrhea signs dection of animals and diarrhea risk prediction
CN113869848A (en) Intelligent pasture management method and system based on big data
CN108681724A (en) Farming operations monitoring method and device
CN106469347A (en) Inspection information acquisition terminal and inspection result auxiliary judgement method after a kind of domestic animal government official
Cotterill et al. Parsing the effects of demography, climate and management on recurrent brucellosis outbreaks in elk
CN108717525A (en) A kind of information processing method, device, computer storage media and terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 100086 room 084, room 1704, 17 / F, Qingyun contemporary building, building 9, Manting Fangyuan community, Qingyun Li, Haidian District, Beijing

Patentee after: Beijing minglue Zhaohui Technology Co.,Ltd.

Address before: 100086 room 084, room 1704, 17 / F, Qingyun contemporary building, building 9, Manting Fangyuan community, Qingyun Li, Haidian District, Beijing

Patentee before: BEIJING SUPERTOOL INTERNET TECHNOLOGY LTD.

TR01 Transfer of patent right

Effective date of registration: 20230804

Address after: 200232 unit 5b06, floor 5, building 2, No. 277, Longlan Road, Xuhui District, Shanghai

Patentee after: Shanghai Mingsheng Pinzhi Artificial Intelligence Technology Co.,Ltd.

Address before: 100086 room 084, room 1704, 17 / F, Qingyun contemporary building, building 9, Manting Fangyuan community, Qingyun Li, Haidian District, Beijing

Patentee before: Beijing minglue Zhaohui Technology Co.,Ltd.