CN112417209A - Real-time video annotation method, system, terminal and medium based on browser


Info

Publication number
CN112417209A
Authority
CN
China
Prior art keywords
video stream
user
real-time
video
client
Prior art date
Legal status
Pending
Application number
CN202011308570.9A
Other languages
Chinese (zh)
Inventor
王子泰
李凡平
石柱国
Current Assignee
Anhui Issa Data Technology Co ltd
Beijing Yisa Technology Co ltd
Qingdao Yisa Data Technology Co Ltd
Original Assignee
Anhui Issa Data Technology Co ltd
Beijing Yisa Technology Co ltd
Qingdao Yisa Data Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Anhui Issa Data Technology Co ltd, Beijing Yisa Technology Co ltd and Qingdao Yisa Data Technology Co Ltd
Priority to CN202011308570.9A
Publication of CN112417209A
Legal status: Pending (current)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F 16/78 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/7867 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually, using information manually generated, e.g. tags, keywords, comments, title and artist information, manually generated time, location and usage information, user ratings
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F 16/73 Querying
    • G06F 16/738 Presentation of query results
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range

Abstract

The invention discloses a browser-based real-time video annotation method, which comprises the following steps: a client acquires a real-time video stream; the client processes the video stream and plays the processed video; the client acquires the user-defined operations performed by the user on the video stream and sends the operation data to a background server for storage; and the client receives the storage-success message sent by the background server and, when the video stream is reopened, displays the video stream with the user-defined operations applied. Because the user-defined operation data are processed at the client, the processing load on the background server is reduced. The user can select and annotate regions of the video directly at the client to mark important information, which can then be reviewed quickly and in good time later on; this improves the fit between the user and the system, allows the video interface to be monitored promptly and clearly, and greatly improves the efficiency of information queries.

Description

Real-time video annotation method, system, terminal and medium based on browser
Technical Field
The invention relates to the technical field of data processing, and in particular to a browser-based real-time video annotation method, system, terminal and medium.
Background
With the continuous development of video-related technologies and the maturing of real-time video technology, massive amounts of video data need to be collected, monitored and analyzed. When analyzing the video regions shown in an interface, the user cannot distinguish the monitoring focus in time and is liable to miss important information when there is too much to analyze.
Disclosure of Invention
To address the above defects in the prior art, embodiments of the present invention provide a browser-based real-time video annotation method, system, terminal and medium, which allow key information in a video to be framed and annotated so that the user can review the key information conveniently.
In a first aspect, a real-time video annotation method based on a browser provided in an embodiment of the present invention includes the following steps:
a client acquires a real-time video stream;
the client processes the video stream and plays the video after processing;
the client acquires the user-defined operations performed by the user on the video stream and sends the operation data to a background server for storage;
and the client receives the storage-success message sent by the background server and displays the video stream with the user-defined operations applied when the video stream is reopened.
Further, the client acquires the user-defined operations performed by the user on the video stream as follows:
the client listens for the user's mouse action events;
draws the corresponding graphic according to the point coordinates of the action events;
and obtains the operation data corresponding to the graphic.
Further, the action events include: mouse button press, mouse button release and hover.
Further, the client processes the real-time video stream using the video.js plug-in.
In a second aspect, a real-time video annotation system based on a browser provided in an embodiment of the present invention includes a video stream acquisition module, a video stream processing module, a video stream customization module, and a display module, wherein,
the video stream acquisition module is used for acquiring a real-time video stream;
the video stream processing module is used for processing the video stream and playing the video after processing;
the video stream customization module is used for acquiring the user-defined operations performed by the user on the video stream and sending the operation data to a background server for storage;
and the display module is used for displaying the video stream with the user-defined operations applied when the video stream is reopened.
Further, the video stream customization module comprises a data processing unit, which is used for listening for the user's mouse action events, drawing the corresponding graphic according to the point coordinates of the action events, and obtaining the operation data corresponding to the graphic.
Further, the action events include: mouse button press, mouse button release and hover.
Further, the video stream processing module processes the real-time video stream using the video.js plug-in.
In a third aspect, an intelligent terminal provided in an embodiment of the present invention includes a processor, an input device, an output device, and a memory, where the processor, the input device, the output device, and the memory are connected to each other, the memory is used to store a computer program, the computer program includes program instructions, and the processor is configured to call the program instructions to execute the method described in the foregoing embodiment.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, in which a computer program is stored, the computer program including program instructions, which, when executed by a processor, cause the processor to execute the method described in the above embodiment.
The invention has the beneficial effects that:
the real-time video annotation method, the real-time video annotation system, the real-time video annotation terminal and the real-time video annotation medium based on the browser, provided by the embodiment of the invention, realize the processing of user-defined operation data at a client side and reduce the processing pressure of a background server. The user can directly select and mark the video area at the client, so as to mark important information, the important information can be checked in time and quickly in the later period, the conformity between the user and the system is optimized, the video interface can be monitored in time and clearly, and the information query efficiency is greatly improved.
Drawings
In order to more clearly illustrate the detailed description of the invention or the technical solutions in the prior art, the drawings that are needed in the detailed description of the invention or the prior art will be briefly described below. Throughout the drawings, like elements or portions are generally identified by like reference numerals. In the drawings, elements or portions are not necessarily drawn to scale.
FIG. 1 is a flowchart illustrating a method for real-time browser-based video annotation according to a first embodiment of the present invention;
FIG. 2 is a block diagram illustrating a real-time video annotation system based on a browser according to a second embodiment of the present invention;
fig. 3 shows a block diagram of an intelligent terminal according to a third embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the specification of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon" or "in response to a determination" or "in response to a detection". Similarly, the phrase "if it is determined" or "if a [ described condition or event ] is detected" may be interpreted contextually to mean "upon determining" or "in response to determining" or "upon detecting [ described condition or event ]" or "in response to detecting [ described condition or event ]".
It is to be noted that, unless otherwise specified, technical or scientific terms used herein shall have the ordinary meaning as understood by those skilled in the art to which the invention pertains.
As shown in fig. 1, a flowchart of a method for real-time video annotation based on a browser according to a first embodiment of the present invention is shown, where the method includes the following steps:
and S1, the client acquires the real-time video stream.
For different scenario requirements, anyRTC provides a range of audio and video solutions; its core services include interactive live streaming, multi-party audio/video conferencing, P2P audio/video calls, real-time broadcasting, intelligent scheduling, interactive whiteboards, online education and the like, meeting the market demand for audio and video services. anyRTC consistently uses the WebRTC technology stack to upgrade and transform legacy audio/video systems, lowering the threshold for users to adopt audio and video technology. Accordingly, the real-time video stream in this embodiment may be audio/video data from interactive live streaming, multi-party audio/video conferencing, P2P audio/video calls, real-time broadcasting, intelligent scheduling, an interactive whiteboard, online education and the like, so the method can meet the demand for custom annotation across the various audio and video services on the market.
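Although the patent does not tie this step to a particular SDK, the standard browser WebRTC API gives a rough idea of how a real-time stream can reach the client page. The sketch below is an illustrative assumption only: the signaling exchange is omitted and the "#live-video" element id is hypothetical; it simply attaches an incoming remote track to a video element.

```typescript
// Minimal sketch: receiving a remote real-time stream with the browser's WebRTC API.
// Signaling (offer/answer exchange) is omitted; "#live-video" is a hypothetical element id.
const pc = new RTCPeerConnection({
  iceServers: [{ urls: "stun:stun.l.google.com:19302" }],
});
const videoEl = document.querySelector<HTMLVideoElement>("#live-video")!;

pc.ontrack = (event: RTCTrackEvent) => {
  // Attach the first incoming MediaStream to the <video> element for playback.
  if (videoEl.srcObject !== event.streams[0]) {
    videoEl.srcObject = event.streams[0];
  }
};
```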
And S2, the client processes the video stream and plays the video after processing.
The client processes the real-time video stream using the video.js plug-in. Video.js is a general-purpose JS library for embedding a video player in a web page; it automatically detects the browser's support for HTML5 and, if HTML5 is not supported, automatically falls back to playing the video with a Flash player.
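As a rough illustration of this step, the sketch below initializes a video.js player on a live stream. The element id, the stream URL and the choice of HLS as the stream format are assumptions made for the example; the patent itself only states that the video.js plug-in is used.

```typescript
// Sketch: playing a live stream with video.js (element id and URL are placeholders).
import videojs from "video.js";
import "video.js/dist/video-js.css";

const player = videojs("live-video", {
  autoplay: true,
  muted: true,   // most browsers only allow muted autoplay
  liveui: true,  // enable the live-stream UI
  sources: [
    { src: "https://example.com/live/stream.m3u8", type: "application/x-mpegURL" },
  ],
});

player.on("error", () => {
  console.error("Playback error:", player.error());
});
```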
And S3, the client acquires the user-defined operation of the user on the video stream and sends the operation data to the background server for storage.
Specifically, the client acquires the user-defined operations performed by the user on the video stream as follows: the client listens for the user's mouse action events; draws the corresponding graphic according to the point coordinates of the action events; and obtains the operation data corresponding to the graphic. The user-defined operations include drawing bounding boxes, adding text labels and/or adding graphic markers, and the like.
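The following sketch shows one way this client-side flow could look, assuming a transparent canvas overlaid on the player. The canvas id, the rectangle-only drawing, the payload shape and the "/api/annotations" endpoint are illustrative assumptions, not details given in the patent.

```typescript
// Sketch: listen for mouse events on an overlay canvas, draw a box from the recorded
// coordinates, and send the operation data to the background server for storage.
const canvas = document.querySelector<HTMLCanvasElement>("#annotation-layer")!;
const ctx = canvas.getContext("2d")!;
let start: { x: number; y: number } | null = null;

canvas.addEventListener("mousedown", (e: MouseEvent) => {
  // Mouse button press: record the starting point of the box.
  start = { x: e.offsetX, y: e.offsetY };
});

canvas.addEventListener("mouseup", async (e: MouseEvent) => {
  if (!start) return;
  // Mouse button release: derive the rectangle from the two recorded points.
  const rect = {
    x: start.x,
    y: start.y,
    w: e.offsetX - start.x,
    h: e.offsetY - start.y,
    label: "key information", // text annotation attached to the box
  };
  start = null;

  // Draw the corresponding graphic on the overlay.
  ctx.strokeStyle = "red";
  ctx.lineWidth = 2;
  ctx.strokeRect(rect.x, rect.y, rect.w, rect.h);

  // Send the operation data to the background server (hypothetical endpoint).
  await fetch("/api/annotations", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ streamId: "stream-001", ...rect }),
  });
});
```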
And S4, the client receives the storage-success message sent by the background server and, when the video stream is reopened, displays the video stream with the user-defined operations applied.
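A corresponding sketch of the reopening step is shown below; it assumes the same hypothetical "/api/annotations" endpoint and annotation shape used in the previous example, fetching the stored operation data and redrawing it over the player.

```typescript
// Sketch: when the video stream is reopened, fetch the stored operation data
// and redraw it on the overlay canvas (endpoint and data shape are assumptions).
interface Annotation {
  x: number;
  y: number;
  w: number;
  h: number;
  label: string;
}

async function restoreAnnotations(streamId: string, ctx: CanvasRenderingContext2D): Promise<void> {
  const res = await fetch(`/api/annotations?streamId=${encodeURIComponent(streamId)}`);
  const annotations: Annotation[] = await res.json();
  for (const a of annotations) {
    ctx.strokeStyle = "red";
    ctx.lineWidth = 2;
    ctx.strokeRect(a.x, a.y, a.w, a.h);
    ctx.fillStyle = "red";
    ctx.fillText(a.label, a.x, a.y - 4); // redraw the text label above the box
  }
}
```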
The browser-based real-time video annotation method provided by the embodiment of the present invention processes the user-defined operation data at the client, reducing the processing load on the background server. The user can select and annotate regions of the video directly at the client to mark important information, which can then be reviewed quickly and in good time later on; this improves the fit between the user and the system, allows the video interface to be monitored promptly and clearly, and greatly improves the efficiency of information queries.
In the first embodiment, a real-time video annotation method based on a browser is provided, and correspondingly, the application also provides a real-time video annotation system based on a browser. Please refer to fig. 2, which is a block diagram illustrating a real-time video annotation system based on a browser according to a second embodiment of the present invention. Since the apparatus embodiments are substantially similar to the method embodiments, they are described in a relatively simple manner, and reference may be made to some of the descriptions of the method embodiments for relevant points. The device embodiments described below are merely illustrative.
As shown in fig. 2, a block diagram of a browser-based real-time video annotation system according to a second embodiment of the present invention is shown. The system includes a video stream acquisition module, a video stream processing module, a video stream customization module and a display module, wherein the video stream acquisition module is configured to acquire a real-time video stream; the video stream processing module is configured to process the video stream and play the processed video; the video stream customization module is configured to acquire the user-defined operations performed by the user on the video stream and send the operation data to the background server for storage; and the display module is configured to display the video stream with the user-defined operations applied when the video stream is reopened. The video stream processing module processes the real-time video stream using the video.js plug-in.
In this embodiment, the video stream customization module includes a data processing unit, which is configured to listen for the user's mouse action events and draw the corresponding graphic according to the point coordinates of the action events to obtain the operation data corresponding to the graphic. The action events include: mouse button press, mouse button release and hover.
The browser-based real-time video annotation system provided by the embodiment of the present invention processes the user-defined operation data at the client, reducing the processing load on the background server. The user can select and annotate regions of the video directly at the client to mark important information, which can then be reviewed quickly and in good time later on; this improves the fit between the user and the system, allows the video interface to be monitored promptly and clearly, and greatly improves the efficiency of information queries.
As shown in fig. 3, a block diagram of an intelligent terminal according to a third embodiment of the present invention is further provided, where the intelligent terminal includes a processor, an input device, an output device, and a memory, where the processor, the input device, the output device, and the memory are connected to each other, the memory is used for storing a computer program, and the computer program includes program instructions, and the processor is configured to call the program instructions to execute the method described in the foregoing embodiment.
It should be understood that in the embodiments of the present invention, the processor may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The input device may include a touch pad, a fingerprint sensor (for collecting fingerprint information of a user and direction information of the fingerprint), a microphone, etc., and the output device may include a display (LCD, etc.), a speaker, etc.
The memory may include both read-only memory and random-access memory, and provides instructions and data to the processor. A portion of the memory may also include non-volatile random-access memory. For example, the memory may also store information on the device type.
In a specific implementation, the processor, the input device, and the output device described in the embodiments of the present invention may execute the implementation described in the method embodiments provided in the embodiments of the present invention, and may also execute the implementation described in the system embodiments in the embodiments of the present invention, which is not described herein again.
The invention also provides an embodiment of a computer-readable storage medium, in which a computer program is stored, which computer program comprises program instructions that, when executed by a processor, cause the processor to carry out the method described in the above embodiment.
The computer readable storage medium may be an internal storage unit of the terminal described in the foregoing embodiment, for example, a hard disk or a memory of the terminal. The computer readable storage medium may also be an external storage device of the terminal, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like provided on the terminal. Further, the computer-readable storage medium may also include both an internal storage unit and an external storage device of the terminal. The computer-readable storage medium is used for storing the computer program and other programs and data required by the terminal. The computer readable storage medium may also be used to temporarily store data that has been output or is to be output.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein may be implemented in electronic hardware, computer software, or a combination of both. To illustrate clearly the interchangeability of hardware and software, the components and steps of the examples have been described above in general terms according to their functions. Whether such functionality is implemented as hardware or software depends on the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in different ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the terminal and the unit described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may also be an electric, mechanical or other form of connection.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; such modifications and substitutions do not depart from the spirit and scope of the present invention, and they should be construed as being included in the following claims and description.

Claims (10)

1. A real-time video annotation method based on a browser is characterized by comprising the following steps:
a client acquires a real-time video stream;
the client processes the video stream and plays the video after processing;
the client acquires the user-defined operations performed by the user on the video stream and sends the operation data to a background server for storage;
and the client receives the storage-success message sent by the background server and displays the video stream with the user-defined operations applied when the video stream is reopened.
2. The method of claim 1, wherein the client acquires the user-defined operations performed by the user on the video stream by:
listening, by the client, for the user's mouse action events;
drawing the corresponding graphic according to the point coordinates of the action events;
and obtaining the operation data corresponding to the graphic.
3. The method of claim 2, wherein the action events comprise: mouse button press, mouse button release and hover.
4. The method of claim 1, wherein the client processes the real-time video stream using the video.js plug-in.
5. A real-time video annotation system based on a browser, characterized by comprising a video stream acquisition module, a video stream processing module, a video stream customization module and a display module, wherein:
the video stream acquisition module is used for acquiring a real-time video stream;
the video stream processing module is used for processing the video stream and playing the video after processing;
the video stream customization module is used for acquiring the user-defined operations performed by the user on the video stream and sending the operation data to a background server for storage;
and the display module is used for displaying the video stream with the user-defined operations applied when the video stream is reopened.
6. The system of claim 5, wherein the video stream customization module comprises a data processing unit configured to listen for the user's mouse action events and draw the corresponding graphic according to the point coordinates of the action events to obtain the operation data corresponding to the graphic.
7. The system of claim 6, wherein the action events comprise: mouse button press, mouse button release and hover.
8. The system of claim 5, wherein the video stream processing module processes the real-time video stream using the video.js plug-in.
9. An intelligent terminal comprising a processor, an input device, an output device and a memory, the processor, the input device, the output device and the memory being interconnected, the memory being adapted to store a computer program, the computer program comprising program instructions, characterized in that the processor is configured to invoke the program instructions to perform the method according to any of claims 1-4.
10. A computer-readable storage medium, characterized in that the computer storage medium stores a computer program comprising program instructions that, when executed by a processor, cause the processor to perform the method according to any of claims 1-4.
CN202011308570.9A, filed 2020-11-20 (priority date 2020-11-20): Real-time video annotation method, system, terminal and medium based on browser. Published as CN112417209A; status: pending.

Priority Applications (1)

Application number: CN202011308570.9A; priority date: 2020-11-20; filing date: 2020-11-20; title: Real-time video annotation method, system, terminal and medium based on browser

Publications (1)

Publication number: CN112417209A; publication date: 2021-02-26

Family

ID=74774500


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
CB02: Change of applicant information
    Address: 266000 3rd floor, building 3, optical valley software park, 396 Emeishan Road, Huangdao District, Qingdao City, Shandong Province
    Applicants before: QINGDAO YISA DATA TECHNOLOGY Co.,Ltd.; BEIJING YISA TECHNOLOGY Co.,Ltd.; Anhui Issa Data Technology Co.,Ltd.
    Applicants after: QINGDAO YISA DATA TECHNOLOGY Co.,Ltd.; Qingdao Issa Technology Co.,Ltd.; Anhui Issa Data Technology Co.,Ltd.
CB02: Change of applicant information
    Address: 266000 3rd floor, building 3, optical valley software park, 396 Emeishan Road, Huangdao District, Qingdao City, Shandong Province
    Applicants before: QINGDAO YISA DATA TECHNOLOGY Co.,Ltd.; Qingdao Issa Technology Co.,Ltd.; Anhui Issa Data Technology Co.,Ltd.
    Applicants after: QINGDAO YISA DATA TECHNOLOGY Co.,Ltd.; Issa Technology Co.,Ltd.; Anhui Issa Data Technology Co.,Ltd.
CB02: Change of applicant information
    Address before: 266000 3rd floor, building 3, optical valley software park, 396 Emeishan Road, Huangdao District, Qingdao City, Shandong Province
    Address after: 266400 Room 302, building 3, Office No. 77, Lingyan Road, Huangdao District, Qingdao, Shandong Province
    Applicants (before and after): QINGDAO YISA DATA TECHNOLOGY Co.,Ltd.; Issa Technology Co.,Ltd.; Anhui Issa Data Technology Co.,Ltd.
RJ01: Rejection of invention patent application after publication (application publication date: 2021-02-26)