<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="https://zengql97.github.io/feed.xml" rel="self" type="application/atom+xml" /><link href="https://zengql97.github.io/" rel="alternate" type="text/html" /><updated>2026-04-30T06:35:59-07:00</updated><id>https://zengql97.github.io/feed.xml</id><title type="html">Qingli Zeng</title><subtitle>Quantitative Marketing Researcher | Incoming Lecturer at Hebrew University of Jerusalem</subtitle><author><name>Qingli Zeng</name><email>zengql.1997@gmail.com</email></author><entry><title type="html">Introduction to Deep Learning</title><link href="https://zengql97.github.io/DLIntroduction/" rel="alternate" type="text/html" title="Introduction to Deep Learning" /><published>2023-12-04T00:00:00-08:00</published><updated>2023-12-04T00:00:00-08:00</updated><id>https://zengql97.github.io/DLIntroduction</id><content type="html" xml:base="https://zengql97.github.io/DLIntroduction/"><![CDATA[<h1 id="introduction-to-deep-learning">Introduction to Deep Learning</h1>

<h3 id="motivation-for-deep-learning"><strong>Motivation for Deep Learning</strong></h3>

<ul>
  <li>Deep learning learns features directly from raw data, eliminating the need for manual feature engineering.</li>
</ul>

<h3 id="why-deep-learning-is-prevalent-now"><strong>Why Deep Learning is Prevalent Now</strong></h3>

<ol>
  <li><strong>Big Data:</strong> Availability of large datasets.</li>
  <li><strong>Hardware Progress:</strong> Advances in hardware, especially massively parallel processors such as GPUs.</li>
  <li><strong>Software Advancements:</strong> Development of efficient open-source frameworks, e.g., TensorFlow.</li>
</ol>

<h3 id="understanding-the-perceptron"><strong>Understanding the Perceptron</strong></h3>

<ul>
  <li>A perceptron is a single neuron, the simplest unit of a neural network; forward propagation passes its inputs through a weighted sum and an activation to produce an output (see the sketch after this list).</li>
  <li><strong>Components:</strong>
    <ol>
      <li><strong>Inputs</strong></li>
      <li><strong>Weights</strong></li>
      <li><strong>Summation:</strong> a weighted sum of the inputs, plus a bias term</li>
      <li><strong>Non-Linearity (Activation Functions):</strong>
        <ul>
          <li>Common functions include:
            <ol>
              <li><strong><code class="language-plaintext highlighter-rouge">tf.math.sigmoid</code></strong></li>
              <li><strong><code class="language-plaintext highlighter-rouge">tf.math.tanh</code></strong></li>
              <li><strong><code class="language-plaintext highlighter-rouge">tf.nn.relu</code></strong></li>
            </ol>
          </li>
          <li><strong>Purpose:</strong> To model real-world non-linearities.</li>
        </ul>
      </li>
      <li><strong>Output</strong></li>
    </ol>
  </li>
</ul>
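
<p>A minimal sketch of this forward pass in TensorFlow; the input, weight, and bias values are made up purely for illustration:</p>

<pre><code class="language-python">import tensorflow as tf

# Made-up example values: three inputs, three weights, one bias
x = tf.constant([1.0, 2.0, 3.0])   # inputs
w = tf.Variable([0.5, -0.2, 0.1])  # weights
b = tf.Variable(0.0)               # bias

z = tf.reduce_sum(w * x) + b       # summation: weighted sum plus bias
y = tf.math.sigmoid(z)             # non-linearity: sigmoid activation
print(y.numpy())                   # output
</code></pre>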

<h3 id="from-perceptron-to-neural-network"><strong>From Perceptron to Neural Network</strong></h3>

<ul>
  <li><strong>Dense Layer:</strong> Basic building block (<strong><code class="language-plaintext highlighter-rouge">tf.keras.layers.Dense(units)</code></strong>).</li>
  <li><strong>Sequential Model:</strong> <strong><code class="language-plaintext highlighter-rouge">model = tf.keras.Sequential([layers])</code></strong>.</li>
  <li><strong>Concept:</strong> Stacking multiple Dense layers forms a deep neural network, as shown in the sketch below.</li>
</ul>
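
<p>A minimal sketch of assembling these building blocks with Keras; the input size and layer widths are arbitrary choices for illustration:</p>

<pre><code class="language-python">import tensorflow as tf

# Stacking Dense layers forms a deep neural network;
# the input size (10) and layer widths (32, 32) are arbitrary
model = tf.keras.Sequential([
    tf.keras.Input(shape=(10,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),  # single output unit (logits)
])
model.summary()  # prints the layer stack and parameter counts
</code></pre>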

<h3 id="training-a-deep-learning-model"><strong>Training a Deep Learning Model</strong></h3>

<ol>
  <li><strong>Loss Function:</strong>
    <ul>
      <li>Examples: <strong><code class="language-plaintext highlighter-rouge">tf.nn.sigmoid_cross_entropy_with_logits</code></strong> (for binary classification), Mean Squared Error (MSE, for regression).</li>
    </ul>
  </li>
  <li><strong>Loss Optimization (Backpropagation):</strong>
    <ul>
      <li><strong>Learning Rate:</strong> Sets the step size of each update; too small converges slowly or gets stuck in local minima, too large overshoots and diverges.</li>
      <li><strong>Gradient Descent Algorithms:</strong>
        <ol>
          <li>Stochastic Gradient Descent (SGD)</li>
          <li>Adam</li>
          <li>Adadelta</li>
          <li>Adagrad</li>
          <li>RMSProp</li>
        </ol>
      </li>
      <li><strong>Mini-batches:</strong> Compute each update on a small subset of the data for efficient, parallelizable training.</li>
    </ul>
  </li>
  <li><strong>Avoiding Overfitting:</strong>
    <ul>
      <li><strong>Regularization Techniques:</strong>
        <ol>
          <li>Dropout: randomly deactivate a fraction of neurons during training</li>
          <li>Early Stopping: halt training when validation performance stops improving (both appear in the sketch after this list)</li>
        </ol>
      </li>
    </ul>
  </li>
</ol>

<p>An end-to-end sketch of these training steps with Keras; the synthetic data, layer sizes, dropout rate, and hyperparameters are placeholder choices, not recommendations:</p>

<pre><code class="language-python">import numpy as np
import tensorflow as tf

# Synthetic data for a binary classification task (placeholder only)
x_train = np.random.rand(1000, 10).astype("float32")
y_train = np.random.randint(0, 2, size=(1000, 1))

model = tf.keras.Sequential([
    tf.keras.Input(shape=(10,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dropout(0.5),  # regularization: randomly drop units while training
    tf.keras.layers.Dense(1),      # logits output
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),     # gradient descent variant
    loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),  # binary cross-entropy on logits
    metrics=["accuracy"],
)

# Mini-batch training with early stopping on the validation loss
model.fit(
    x_train, y_train,
    batch_size=32,  # mini-batches
    epochs=50,
    validation_split=0.2,
    callbacks=[tf.keras.callbacks.EarlyStopping(patience=3)],
)
</code></pre>]]></content><author><name>Qingli Zeng</name><email>zengql.1997@gmail.com</email></author><category term="machine learning" /><summary type="html"><![CDATA[Introduction to Deep Learning]]></summary></entry></feed>