<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>CNN, LSTM, Spatial Attention | Maaz Salman</title><link>https://maazsalman.com/tags/cnn-lstm-spatial-attention/</link><atom:link href="https://maazsalman.com/tags/cnn-lstm-spatial-attention/index.xml" rel="self" type="application/rss+xml"/><description>CNN, LSTM, Spatial Attention</description><generator>Hugo Blox Builder (https://hugoblox.com)</generator><language>en-us</language><lastBuildDate>Sat, 01 Jan 2022 00:00:00 +0000</lastBuildDate><image><url>https://maazsalman.com/media/icon_hub75224e924a801dac222e2220d610f2c_32468_512x512_fill_lanczos_center_3.png</url><title>CNN, LSTM, Spatial Attention</title><link>https://maazsalman.com/tags/cnn-lstm-spatial-attention/</link></image><item><title>Hybrid CNN-LSTM Model with Attention for Time Series Classification</title><link>https://maazsalman.com/project/hybrid-cnn-lstm-with-spatial-attention/</link><pubDate>Sat, 01 Jan 2022 00:00:00 +0000</pubDate><guid>https://maazsalman.com/project/hybrid-cnn-lstm-with-spatial-attention/</guid><description>&lt;h2 id="-project-overview">🔬 Project Overview&lt;/h2>
&lt;p>This page documents the training and evaluation of a hybrid CNN-LSTM model with attention for time series classification. The model combines convolutional neural networks (CNNs) for feature extraction, a long short-term memory (LSTM) network for sequential modeling, and an attention mechanism that focuses on the most informative parts of each sequence. The goal is to classify the provided normalized time-series sequences into one of four classes.&lt;/p>
&lt;h2 id="-technical-details">⚙️ Technical Details&lt;/h2>
&lt;p>The hybrid model consists of:&lt;/p>
&lt;ul>
&lt;li>&lt;strong>CNN Layers:&lt;/strong> Extract spatial features from the time series.
&lt;ul>
&lt;li>Two convolutional layers (&lt;code>Conv1d&lt;/code>) with ReLU activations.&lt;/li>
&lt;li>Two max-pooling layers (&lt;code>MaxPool1d&lt;/code>) for downsampling.&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>&lt;strong>Attention Mechanism:&lt;/strong> A spatial attention module weights the time steps of the extracted features, enhancing the model&amp;rsquo;s ability to focus on critical segments of the input.&lt;/li>
&lt;li>&lt;strong>LSTM Layer:&lt;/strong> Processes the re-weighted features to capture temporal dependencies.
&lt;ul>
&lt;li>A single LSTM layer with a hidden size of 8.&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>&lt;strong>Fully Connected (FC) Layer:&lt;/strong> Maps the final LSTM output to the 4 class labels.&lt;/li>
&lt;/ul>
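&lt;p>The layers above can be sketched in PyTorch as follows. The layer counts, hidden size, and number of classes follow the list; the channel counts, kernel sizes, and example sequence length are illustrative assumptions, not the repository&amp;rsquo;s exact settings:&lt;/p>

```python
import torch
import torch.nn as nn

class HybridCNNLSTMAttention(nn.Module):
    """Sketch: two Conv1d+ReLU+MaxPool1d blocks, spatial attention,
    a single-layer LSTM (hidden size 8), and an FC head (4 classes)."""

    def __init__(self, in_channels=1, num_classes=4, hidden_size=8):
        super().__init__()
        # CNN feature extractor: two conv blocks with downsampling.
        self.cnn = nn.Sequential(
            nn.Conv1d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # Spatial attention: one score per time step, softmax-normalized.
        self.attention = nn.Conv1d(32, 1, kernel_size=1)
        self.lstm = nn.LSTM(input_size=32, hidden_size=hidden_size,
                            num_layers=1, batch_first=True)
        self.fc = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        # x: (batch, channels, seq_len)
        feats = self.cnn(x)                                     # (batch, 32, L)
        weights = torch.softmax(self.attention(feats), dim=-1)  # (batch, 1, L)
        feats = feats * weights                  # re-weight each time step
        out, _ = self.lstm(feats.transpose(1, 2))  # (batch, L, hidden)
        return self.fc(out[:, -1, :])            # logits: (batch, num_classes)

model = HybridCNNLSTMAttention()
logits = model(torch.randn(2, 1, 64))  # batch of 2 sequences, length 64
print(logits.shape)  # torch.Size([2, 4])
```

&lt;p>Using the last LSTM hidden state as the sequence summary is one common design choice here; pooling over all time steps would be an alternative.&lt;/p>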
&lt;p>&lt;em>(For the full source code, configuration, and implementation details, please view the GitHub repository using the button above).&lt;/em>&lt;/p></description></item></channel></rss>