<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://c4d.lias-lab.fr/index.php?action=history&amp;feed=atom&amp;title=WP3-37</id>
	<title>WP3-37 - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://c4d.lias-lab.fr/index.php?action=history&amp;feed=atom&amp;title=WP3-37"/>
	<link rel="alternate" type="text/html" href="https://c4d.lias-lab.fr/index.php?title=WP3-37&amp;action=history"/>
	<updated>2026-04-07T01:17:46Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.37.1</generator>
	<entry>
		<id>https://c4d.lias-lab.fr/index.php?title=WP3-37&amp;diff=1007&amp;oldid=prev</id>
		<title>Grolleaue: Created page with &quot;=Video and data analytics= {|class=&quot;wikitable&quot; |  ID|| WP3-37 |- |   Contributor	|| Aitek (AI) |- |   Levels	|| Function |- |   Require	|| 	Onboard camera |- |   Provide		||  |- |   Input		 | * Video streams collected by onboard cameras * (optional) Other data collected by the drones (e.g. GPS position) |- |   Output		||  * Detection and localization of targets. Such targets will be defined in details according to the applicative requirements defined in UC5. * Detection...&quot;</title>
		<link rel="alternate" type="text/html" href="https://c4d.lias-lab.fr/index.php?title=WP3-37&amp;diff=1007&amp;oldid=prev"/>
		<updated>2023-03-10T17:13:05Z</updated>

		<summary type="html">&lt;p&gt;Created page with &amp;quot;=Video and data analytics= {|class=&amp;quot;wikitable&amp;quot; |  ID|| WP3-37 |- |   Contributor	|| Aitek (AI) |- |   Levels	|| Function |- |   Require	|| 	Onboard camera |- |   Provide		||  |- |   Input		 | * Video streams collected by onboard cameras * (optional) Other data collected by the drones (e.g. GPS position) |- |   Output		||  * Detection and localization of targets. Such targets will be defined in details according to the applicative requirements defined in UC5. * Detection...&amp;quot;&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;=Video and data analytics=&lt;br /&gt;
{|class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|  ID|| WP3-37&lt;br /&gt;
|-&lt;br /&gt;
|   Contributor	|| Aitek (AI)&lt;br /&gt;
|-&lt;br /&gt;
|   Levels	|| Function&lt;br /&gt;
|-&lt;br /&gt;
|   Require	|| 	Onboard camera&lt;br /&gt;
|-&lt;br /&gt;
|   Provide		|| &lt;br /&gt;
|-&lt;br /&gt;
|   Input		&lt;br /&gt;
|&lt;br /&gt;
* Video streams collected by onboard cameras&lt;br /&gt;
* (optional) Other data collected by the drones (e.g. GPS position)&lt;br /&gt;
|-&lt;br /&gt;
|   Output		|| &lt;br /&gt;
* Detection and localization of targets. Such targets will be defined in details according to the applicative requirements defined in UC5.&lt;br /&gt;
* Detection of relevant information about targets (e.g. size). Such information will be defined in details according to the applicative requirements defined in UC5.&lt;br /&gt;
|-&lt;br /&gt;
|   C4D building block		|| &lt;br /&gt;
|-&lt;br /&gt;
|   TRL		|| 6&lt;br /&gt;
|-&lt;br /&gt;
| Contact || Stefano Delucchi - sdelucchi@aitek.it&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=General Description=&lt;br /&gt;
A software (SW) component that implements video analysis algorithms based on Deep Learning approaches. It is used to process mainly RGB images and, where available, infrared images.&lt;br /&gt;
&lt;br /&gt;
It covers the definition, training and implementation of AI-based algorithms for target detection, localization and classification.&lt;br /&gt;
&lt;br /&gt;
The SW component is defined to be as general as possible, but it has been implemented and demonstrated in the scope of the Smart Agriculture use case: detecting the individual artichoke plants, and the corresponding production rows, across the different phases of the vegetative development of the plant (from the first weeks after sowing up to the complete development of the plants and the first appearance of weeds).&lt;br /&gt;
&lt;br /&gt;
=Specification and contribution=&lt;br /&gt;
&lt;br /&gt;
[[File:wp3-37-1.jpg|frame|center|Training and non-real-time actions]]&lt;br /&gt;
&lt;br /&gt;
[[File:wp3-37-2.jpg|frame|center|Real-time actions]]&lt;br /&gt;
&lt;br /&gt;
=Design and Implementation=&lt;br /&gt;
&lt;br /&gt;
Single Shot Detector (SSD) is a family of deep learning algorithms designed to detect and classify objects in images. An SSD is composed of a backbone neural network that extracts image features and a set of convolutional layers (the head) that predicts object classes and bounding boxes. In this case study, two networks have been implemented: a custom Feature Pyramid Network and the classic YOLOv5.&lt;br /&gt;
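As an illustration of the prior-grid geometry that such detectors tile over the input image, the following is a minimal sketch under standard SSD assumptions, not project code; the helper name `prior_centres` and its defaults (which mirror the Feature Pyramid Network configuration quoted on this page) are hypothetical.

```python
# Sketch (assumption): enumerate SSD prior-box centres for grid sizes
# 4x4, 8x8 and 16x16 on a 512x512 input, with one 1x1 prior per cell.
def prior_centres(grid_sizes=(4, 8, 16), input_size=512):
    priors = []
    for g in grid_sizes:
        step = input_size / g          # pixels covered by one grid cell
        for row in range(g):
            for col in range(g):
                # centre of each cell, in input-image pixel coordinates
                priors.append(((col + 0.5) * step, (row + 0.5) * step))
    return priors

boxes = prior_centres()
print(len(boxes))  # 4*4 + 8*8 + 16*16 = 336 priors
```

Each grid covers the full image at a different scale, so coarse grids localize large targets (e.g. whole production rows) while fine grids localize small ones (e.g. individual plants).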
*	Feature Pyramid Network:&lt;br /&gt;
**	Grid Sizes: (4x4, 8x8, 16x16)&lt;br /&gt;
**	Prior Sizes: (1x1)&lt;br /&gt;
**	Input Size: (512x512)&lt;br /&gt;
**	Total params: 2.8 M&lt;br /&gt;
&lt;br /&gt;
*	YOLOv5n:&lt;br /&gt;
**	Input Size: (640x640)&lt;br /&gt;
**	Total params: 1.9 M&lt;/div&gt;</summary>
		<author><name>Grolleaue</name></author>
	</entry>
</feed>