
Notes for Machine Learning Specialization: Vectorization

This is a note for the Machine Learning Specialization.

From:

P22 and P23

Here are our parameters and features:

$$\vec{w} = \begin{bmatrix} w_1 & w_2 & w_3 \end{bmatrix}, \quad b \text{ is a number}, \quad \vec{x} = \begin{bmatrix} x_1 & x_2 & x_3 \end{bmatrix}$$

So here $n = 3$.

In linear algebra, indexing (counting) starts from 1; in Python code, indexing starts from 0.

import numpy as np

w = np.array([1.0, 2.5, -3.3])
b = 4
x = np.array([10, 20, 30])
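
Note the index shift: the math symbol $w_1$ is w[0] in the code. A quick check (just an illustration, not part of the course snippet):

print(w[0])  # 1.0, i.e. w_1 in math notation
print(w[2])  # -3.3, i.e. w_3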

Without vectorization

f = (w[0] * x[0] +
     w[1] * x[1] +
     w[2] * x[2] + b)

Without vectorization (using a for loop): $f = \sum_{j=1}^{n} w_j x_j + b$

f = 0
n = len(w)  # number of features
for j in range(n):
    f = f + w[j] * x[j]
f = f + b

Vectorization

f = np.dot(w, x) + b

Vectorization’s benefits:

  • Shorter code
  • Faster execution (NumPy can use optimized, parallel hardware routines; see the timing sketch below)
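
The speed difference is easy to see by timing both versions on a large vector (a minimal sketch of my own, not from the course; n = 1,000,000 here, and the exact numbers depend on your machine):

import time
import numpy as np

n = 1_000_000
w = np.random.rand(n)
x = np.random.rand(n)
b = 4

# Loop version: n sequential multiply-adds in Python
start = time.time()
f = 0
for j in range(n):
    f = f + w[j] * x[j]
f = f + b
print(f"loop:   {time.time() - start:.4f} s")

# Vectorized version: one call into NumPy's optimized, parallel routines
start = time.time()
f = np.dot(w, x) + b
print(f"np.dot: {time.time() - start:.4f} s")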

P24

Ex.: vectorization for gradient descent
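
Here is a minimal sketch of that update step (the learning rate 0.1 matches the course slide; the values of w and d are just for illustration, with d standing for the partial derivatives $\frac{\partial J}{\partial w_j}$):

import numpy as np

w = np.array([0.5, 1.3, 3.4])
d = np.array([0.3, 0.2, 0.4])  # partial derivatives of the cost w.r.t. each w_j

# Without vectorization: update each parameter one at a time
for j in range(len(w)):
    w[j] = w[j] - 0.1 * d[j]

# With vectorization: update all parameters in a single operation
w = w - 0.1 * d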

P55

For loops vs. vectorization

# For loops
import numpy as np

x = np.array([200, 17])
W = np.array([[1, -3, 5],
              [-2, 4, -6]])
b = np.array([-1, 1, 2])

def g(z):
    # the course doesn't define g in this snippet; assuming the sigmoid activation
    return 1 / (1 + np.exp(-z))

def dense(a_in, W, b):
    units = W.shape[1]          # number of units = number of columns of W
    a_out = np.zeros(units)
    for j in range(units):
        w = W[:, j]             # weights of the j-th unit
        z = np.dot(w, a_in) + b[j]
        a_out[j] = g(z)
    return a_out
# Vectorization
X = np.array([[200, 17]])   # 1x2 2D array
W = np.array([[1, -3, 5],
              [-2, 4, -6]])
B = np.array([[-1, 1, 2]])  # 1x3 2D array

def dense(A_in, W, B):
    Z = np.matmul(A_in, W) + B  # matrix multiplication
    A_out = g(Z)                # apply the activation element-wise
    return A_out
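
A quick sanity check (assuming the sigmoid g defined in the loop version above): with these weights the pre-activations are $z = [165, -531, 900]$, so the sigmoid saturates and the output is approximately [[1. 0. 1.]].

A_out = dense(X, W, B)
print(A_out.shape)  # (1, 3)
print(A_out)        # approximately [[1. 0. 1.]]; np.exp may warn about
                    # overflow for z = -531, but the value still saturates to 0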